00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 111
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3289
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.037 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.038 The recommended git tool is: git
00:00:00.038 using credential 00000000-0000-0000-0000-000000000002
00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.066 Fetching changes from the remote Git repository
00:00:00.069 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.105 Using shallow fetch with depth 1
00:00:00.105 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.105 > git --version # timeout=10
00:00:00.155 > git --version # 'git version 2.39.2'
00:00:00.155 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.198 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.198 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.272 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.284 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.297 Checking out Revision 8d05d9b748dd18cae96eb3802f97dd56ef08e163 (FETCH_HEAD)
00:00:04.297 > git config core.sparsecheckout # timeout=10
00:00:04.308 > git read-tree -mu HEAD # timeout=10
00:00:04.325 > git checkout -f 8d05d9b748dd18cae96eb3802f97dd56ef08e163 # timeout=5
00:00:04.342 Commit message: "jjb/jobs: reduce repetitive accel tests execution"
00:00:04.343 > git rev-list --no-walk 8d05d9b748dd18cae96eb3802f97dd56ef08e163 # timeout=10
00:00:04.429 [Pipeline] Start of Pipeline
00:00:04.444 [Pipeline] library
00:00:04.446 Loading library shm_lib@master
00:00:04.446 Library shm_lib@master is cached. Copying from home.
00:00:04.464 [Pipeline] node
00:00:19.467 Still waiting to schedule task
00:00:19.468 ‘FCP03’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘FCP04’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘FCP07’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘FCP08’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘FCP09’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘FCP10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘FCP11’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘FCP12’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘GP10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘GP13’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘GP15’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘GP18’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘GP19’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘GP4’ is offline
00:00:19.468 ‘GP5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘ImageBuilder1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘Jenkins’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘ME1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘ME2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘ME3’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘PE5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM25’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM26’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM27’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM28’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM29’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM30’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM31’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM32’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM33’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM34’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM35’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM6’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM7’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘SM8’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘VM-host-PE1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘VM-host-PE2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘VM-host-PE3’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘VM-host-PE4’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘VM-host-SM18’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘VM-host-WFP25’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘WCP2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘WCP5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘WCP8’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.468 ‘WFP10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP17’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP27’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP30’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP32’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP33’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP34’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP35’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP36’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP37’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP38’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP49’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP4’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP52’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP53’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP63’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP65’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP68’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP69’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘WFP9’ is offline
00:00:19.469 ‘ipxe-staging’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘spdk-pxe-01’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.469 ‘spdk-pxe-02’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:35.435 Running on GP2 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:35.437 [Pipeline] {
00:00:35.449 [Pipeline] catchError
00:00:35.451 [Pipeline] {
00:00:35.467 [Pipeline] wrap
00:00:35.477 [Pipeline] {
00:00:35.488 [Pipeline] stage
00:00:35.491 [Pipeline] { (Prologue)
00:00:35.718 [Pipeline] sh
00:00:36.001 + logger -p user.info -t JENKINS-CI
00:00:36.031 [Pipeline] echo
00:00:36.033 Node: GP2
00:00:36.045 [Pipeline] sh
00:00:36.345 [Pipeline] setCustomBuildProperty
00:00:36.360 [Pipeline] echo
00:00:36.361 Cleanup processes
00:00:36.366 [Pipeline] sh
00:00:36.645 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:36.645 3622560 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:36.659 [Pipeline] sh
00:00:36.941 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:36.942 ++ grep -v 'sudo pgrep'
00:00:36.942 ++ awk '{print $1}'
00:00:36.942 + sudo kill -9
00:00:36.942 + true
00:00:36.957 [Pipeline] cleanWs
00:00:36.967 [WS-CLEANUP] Deleting project workspace...
00:00:36.967 [WS-CLEANUP] Deferred wipeout is used...
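The cleanup step traced above is a common pattern: pgrep everything whose command line mentions the workspace, filter the pgrep invocation itself out of the list so the script does not kill its own pipeline, and treat "no matches" as success. A minimal standalone sketch of that pattern (the WORKSPACE path is a placeholder, not the job's real path, and sudo is dropped for illustration):

```shell
#!/usr/bin/env bash
# Kill leftover processes whose command line mentions the workspace.
# WORKSPACE is illustrative; the job uses its Jenkins workspace path.
WORKSPACE="${WORKSPACE:-/tmp/demo-workspace}"

# pgrep -af prints "<pid> <full command line>". grep -v drops the pgrep
# process itself; awk keeps only the PID column. `|| true` mirrors the
# `+ true` in the log: an empty match list is not a failure.
pids=$(pgrep -af "$WORKSPACE" | grep -v 'pgrep' | awk '{print $1}' || true)
if [ -n "$pids" ]; then
    kill -9 $pids || true
fi
```

Without the `grep -v`, the pipeline would report (and try to kill) the pgrep process that is enumerating it, which is why the trace shows the extra filter stage.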
00:00:36.973 [WS-CLEANUP] done 00:00:36.978 [Pipeline] setCustomBuildProperty 00:00:36.994 [Pipeline] sh 00:00:37.275 + sudo git config --global --replace-all safe.directory '*' 00:00:37.374 [Pipeline] httpRequest 00:00:37.394 [Pipeline] echo 00:00:37.396 Sorcerer 10.211.164.101 is alive 00:00:37.405 [Pipeline] httpRequest 00:00:37.409 HttpMethod: GET 00:00:37.410 URL: http://10.211.164.101/packages/jbp_8d05d9b748dd18cae96eb3802f97dd56ef08e163.tar.gz 00:00:37.411 Sending request to url: http://10.211.164.101/packages/jbp_8d05d9b748dd18cae96eb3802f97dd56ef08e163.tar.gz 00:00:37.412 Response Code: HTTP/1.1 200 OK 00:00:37.412 Success: Status code 200 is in the accepted range: 200,404 00:00:37.413 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_8d05d9b748dd18cae96eb3802f97dd56ef08e163.tar.gz 00:00:37.915 [Pipeline] sh 00:00:38.198 + tar --no-same-owner -xf jbp_8d05d9b748dd18cae96eb3802f97dd56ef08e163.tar.gz 00:00:38.214 [Pipeline] httpRequest 00:00:38.235 [Pipeline] echo 00:00:38.237 Sorcerer 10.211.164.101 is alive 00:00:38.250 [Pipeline] httpRequest 00:00:38.256 HttpMethod: GET 00:00:38.256 URL: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:38.257 Sending request to url: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:38.258 Response Code: HTTP/1.1 200 OK 00:00:38.258 Success: Status code 200 is in the accepted range: 200,404 00:00:38.259 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:45.771 [Pipeline] sh 00:00:46.055 + tar --no-same-owner -xf spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:49.350 [Pipeline] sh 00:00:49.635 + git -C spdk log --oneline -n5 00:00:49.635 241d0f3c9 test: fix dpdk builds on ubuntu24 00:00:49.635 327de4622 test/bdev: Skip "hidden" nvme devices from the sysfs 00:00:49.635 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:00:49.635 330a4f94d 
nvme: check pthread_mutex_destroy() return value 00:00:49.635 7b72c3ced nvme: add nvme_ctrlr_lock 00:00:49.655 [Pipeline] withCredentials 00:00:49.666 > git --version # timeout=10 00:00:49.680 > git --version # 'git version 2.39.2' 00:00:49.698 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:49.701 [Pipeline] { 00:00:49.712 [Pipeline] retry 00:00:49.714 [Pipeline] { 00:00:49.734 [Pipeline] sh 00:00:50.018 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:50.600 [Pipeline] } 00:00:50.626 [Pipeline] // retry 00:00:50.631 [Pipeline] } 00:00:50.650 [Pipeline] // withCredentials 00:00:50.658 [Pipeline] httpRequest 00:00:50.674 [Pipeline] echo 00:00:50.676 Sorcerer 10.211.164.101 is alive 00:00:50.684 [Pipeline] httpRequest 00:00:50.688 HttpMethod: GET 00:00:50.689 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:50.690 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:50.692 Response Code: HTTP/1.1 200 OK 00:00:50.693 Success: Status code 200 is in the accepted range: 200,404 00:00:50.693 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:51.928 [Pipeline] sh 00:00:52.212 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:54.138 [Pipeline] sh 00:00:54.423 + git -C dpdk log --oneline -n5 00:00:54.423 caf0f5d395 version: 22.11.4 00:00:54.423 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:00:54.423 dc9c799c7d vhost: fix missing spinlock unlock 00:00:54.423 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:00:54.423 6ef77f2a5e net/gve: fix RX buffer size alignment 00:00:54.435 [Pipeline] } 00:00:54.453 [Pipeline] // stage 00:00:54.461 [Pipeline] stage 00:00:54.463 [Pipeline] { (Prepare) 00:00:54.484 [Pipeline] writeFile 00:00:54.502 [Pipeline] sh 00:00:54.785 + logger -p user.info -t 
JENKINS-CI 00:00:54.799 [Pipeline] sh 00:00:55.082 + logger -p user.info -t JENKINS-CI 00:00:55.094 [Pipeline] sh 00:00:55.372 + cat autorun-spdk.conf 00:00:55.372 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.372 SPDK_TEST_NVMF=1 00:00:55.372 SPDK_TEST_NVME_CLI=1 00:00:55.373 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.373 SPDK_TEST_NVMF_NICS=e810 00:00:55.373 SPDK_TEST_VFIOUSER=1 00:00:55.373 SPDK_RUN_UBSAN=1 00:00:55.373 NET_TYPE=phy 00:00:55.373 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:55.373 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:55.380 RUN_NIGHTLY=1 00:00:55.384 [Pipeline] readFile 00:00:55.410 [Pipeline] withEnv 00:00:55.412 [Pipeline] { 00:00:55.429 [Pipeline] sh 00:00:55.715 + set -ex 00:00:55.715 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:55.715 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:55.715 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.715 ++ SPDK_TEST_NVMF=1 00:00:55.715 ++ SPDK_TEST_NVME_CLI=1 00:00:55.715 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.715 ++ SPDK_TEST_NVMF_NICS=e810 00:00:55.715 ++ SPDK_TEST_VFIOUSER=1 00:00:55.715 ++ SPDK_RUN_UBSAN=1 00:00:55.715 ++ NET_TYPE=phy 00:00:55.715 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:55.715 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:55.715 ++ RUN_NIGHTLY=1 00:00:55.715 + case $SPDK_TEST_NVMF_NICS in 00:00:55.715 + DRIVERS=ice 00:00:55.715 + [[ tcp == \r\d\m\a ]] 00:00:55.715 + [[ -n ice ]] 00:00:55.715 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:55.715 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:55.715 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:55.715 rmmod: ERROR: Module irdma is not currently loaded 00:00:55.715 rmmod: ERROR: Module i40iw is not currently loaded 00:00:55.715 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:55.715 + true 00:00:55.715 + for D in $DRIVERS 00:00:55.715 + sudo modprobe ice 00:00:55.715 + 
exit 0 00:00:55.725 [Pipeline] } 00:00:55.744 [Pipeline] // withEnv 00:00:55.750 [Pipeline] } 00:00:55.769 [Pipeline] // stage 00:00:55.781 [Pipeline] catchError 00:00:55.783 [Pipeline] { 00:00:55.801 [Pipeline] timeout 00:00:55.801 Timeout set to expire in 50 min 00:00:55.803 [Pipeline] { 00:00:55.821 [Pipeline] stage 00:00:55.823 [Pipeline] { (Tests) 00:00:55.839 [Pipeline] sh 00:00:56.160 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:56.160 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:56.160 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:56.160 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:56.160 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:56.160 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:56.160 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:56.160 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:56.160 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:56.160 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:56.160 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:56.160 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:56.160 + source /etc/os-release 00:00:56.160 ++ NAME='Fedora Linux' 00:00:56.161 ++ VERSION='38 (Cloud Edition)' 00:00:56.161 ++ ID=fedora 00:00:56.161 ++ VERSION_ID=38 00:00:56.161 ++ VERSION_CODENAME= 00:00:56.161 ++ PLATFORM_ID=platform:f38 00:00:56.161 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:56.161 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:56.161 ++ LOGO=fedora-logo-icon 00:00:56.161 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:56.161 ++ HOME_URL=https://fedoraproject.org/ 00:00:56.161 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:56.161 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:56.161 ++ 
BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:56.161 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:56.161 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:56.161 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:56.161 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:56.161 ++ SUPPORT_END=2024-05-14
00:00:56.161 ++ VARIANT='Cloud Edition'
00:00:56.161 ++ VARIANT_ID=cloud
00:00:56.161 + uname -a
00:00:56.161 Linux spdk-gp-02 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:56.161 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:57.099 Hugepages
00:00:57.099 node hugesize free / total
00:00:57.099 node0 1048576kB 0 / 0
00:00:57.099 node0 2048kB 0 / 0
00:00:57.099 node1 1048576kB 0 / 0
00:00:57.099 node1 2048kB 0 / 0
00:00:57.099
00:00:57.099 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:57.099 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - -
00:00:57.099 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - -
00:00:57.099 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - -
00:00:57.099 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - -
00:00:57.099 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - -
00:00:57.099 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - -
00:00:57.099 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - -
00:00:57.099 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - -
00:00:57.099 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - -
00:00:57.099 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - -
00:00:57.099 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - -
00:00:57.099 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - -
00:00:57.099 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - -
00:00:57.099 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - -
00:00:57.099 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - -
00:00:57.099 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - -
00:00:57.099 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:57.099 + rm -f /tmp/spdk-ld-path
00:00:57.099 + source autorun-spdk.conf
00:00:57.099 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:57.099 ++ SPDK_TEST_NVMF=1
00:00:57.099 ++
SPDK_TEST_NVME_CLI=1 00:00:57.099 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.099 ++ SPDK_TEST_NVMF_NICS=e810 00:00:57.099 ++ SPDK_TEST_VFIOUSER=1 00:00:57.099 ++ SPDK_RUN_UBSAN=1 00:00:57.099 ++ NET_TYPE=phy 00:00:57.099 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:57.099 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:57.099 ++ RUN_NIGHTLY=1 00:00:57.099 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:57.099 + [[ -n '' ]] 00:00:57.099 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:57.099 + for M in /var/spdk/build-*-manifest.txt 00:00:57.099 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:57.099 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:57.099 + for M in /var/spdk/build-*-manifest.txt 00:00:57.099 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:57.099 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:57.099 ++ uname 00:00:57.099 + [[ Linux == \L\i\n\u\x ]] 00:00:57.099 + sudo dmesg -T 00:00:57.099 + sudo dmesg --clear 00:00:57.099 + dmesg_pid=3623159 00:00:57.099 + [[ Fedora Linux == FreeBSD ]] 00:00:57.099 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:57.099 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:57.099 + sudo dmesg -Tw 00:00:57.099 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:57.099 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:57.099 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:57.099 + [[ -x /usr/src/fio-static/fio ]] 00:00:57.099 + export FIO_BIN=/usr/src/fio-static/fio 00:00:57.099 + FIO_BIN=/usr/src/fio-static/fio 00:00:57.099 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:57.099 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:57.099 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:57.099 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:57.099 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:57.099 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:57.099 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:57.099 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:57.099 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:57.099 Test configuration: 00:00:57.099 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.099 SPDK_TEST_NVMF=1 00:00:57.099 SPDK_TEST_NVME_CLI=1 00:00:57.099 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.099 SPDK_TEST_NVMF_NICS=e810 00:00:57.099 SPDK_TEST_VFIOUSER=1 00:00:57.099 SPDK_RUN_UBSAN=1 00:00:57.099 NET_TYPE=phy 00:00:57.099 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:57.099 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:57.099 RUN_NIGHTLY=1 10:21:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:57.099 10:21:45 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:57.099 10:21:45 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:57.099 10:21:45 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:57.099 10:21:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:57.099 10:21:45 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:57.099 10:21:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:57.099 10:21:45 -- paths/export.sh@5 -- $ export PATH 00:00:57.099 10:21:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:57.099 10:21:45 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:57.099 10:21:45 -- common/autobuild_common.sh@440 -- $ date +%s 00:00:57.099 10:21:45 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721722905.XXXXXX 00:00:57.099 10:21:45 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721722905.VRijGm 00:00:57.099 10:21:45 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:00:57.099 10:21:45 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:00:57.099 10:21:45 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:57.099 10:21:45 -- 
common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:57.099 10:21:45 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:57.099 10:21:45 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:57.099 10:21:45 -- common/autobuild_common.sh@456 -- $ get_config_params 00:00:57.099 10:21:45 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:57.099 10:21:45 -- common/autotest_common.sh@10 -- $ set +x 00:00:57.360 10:21:45 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:00:57.360 10:21:45 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:00:57.360 10:21:45 -- pm/common@17 -- $ local monitor 00:00:57.360 10:21:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:57.360 10:21:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:57.360 10:21:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:57.360 10:21:45 -- pm/common@21 -- $ date +%s 00:00:57.360 10:21:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:57.360 10:21:45 -- pm/common@21 -- $ date +%s 00:00:57.360 10:21:45 -- pm/common@25 -- $ sleep 1 00:00:57.360 10:21:45 -- pm/common@21 -- $ date +%s 00:00:57.360 10:21:45 -- pm/common@21 -- $ date +%s 00:00:57.360 10:21:45 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721722905
00:00:57.360 10:21:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721722905
00:00:57.360 10:21:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721722905
00:00:57.360 10:21:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721722905
00:00:57.360 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721722905_collect-vmstat.pm.log
00:00:57.360 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721722905_collect-cpu-load.pm.log
00:00:57.360 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721722905_collect-cpu-temp.pm.log
00:00:57.360 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721722905_collect-bmc-pm.bmc.pm.log
00:00:58.303 10:21:46 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT
00:00:58.303 10:21:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:58.303 10:21:46 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:58.303 10:21:46 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:58.303 10:21:46 -- spdk/autobuild.sh@16 -- $ date -u
00:00:58.303 Tue Jul 23 08:21:46 AM UTC 2024
00:00:58.303 10:21:46 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:58.303 v24.05-15-g241d0f3c9 00:00:58.303 10:21:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:58.303 10:21:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:58.303 10:21:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:58.303 10:21:46 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:58.303 10:21:46 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:58.303 10:21:46 -- common/autotest_common.sh@10 -- $ set +x 00:00:58.303 ************************************ 00:00:58.303 START TEST ubsan 00:00:58.303 ************************************ 00:00:58.303 10:21:46 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:58.303 using ubsan 00:00:58.303 00:00:58.303 real 0m0.000s 00:00:58.303 user 0m0.000s 00:00:58.303 sys 0m0.000s 00:00:58.303 10:21:46 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:58.303 10:21:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:58.303 ************************************ 00:00:58.303 END TEST ubsan 00:00:58.303 ************************************ 00:00:58.303 10:21:46 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:00:58.303 10:21:46 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:00:58.303 10:21:46 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:00:58.303 10:21:46 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:00:58.303 10:21:46 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:58.303 10:21:46 -- common/autotest_common.sh@10 -- $ set +x 00:00:58.303 ************************************ 00:00:58.303 START TEST build_native_dpdk 00:00:58.303 ************************************ 00:00:58.303 10:21:46 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local 
external_dpdk_base_dir 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:58.303 10:21:46 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:00:58.303 caf0f5d395 version: 22.11.4 00:00:58.303 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:00:58.303 dc9c799c7d vhost: fix missing spinlock unlock 00:00:58.303 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:00:58.304 6ef77f2a5e net/gve: fix RX buffer size alignment 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 
]] 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:00:58.304 patching file config/rte_config.h 00:00:58.304 Hunk #1 succeeded at 60 (offset 1 line). 
00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:00:58.304 10:21:46 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:00:58.304 patching file lib/pcapng/rte_pcapng.c 00:00:58.304 Hunk #1 succeeded at 110 (offset -18 lines). 
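The two version checks traced above (`lt 22.11.4 21.11.0` returning 1, then `lt 22.11.4 24.07.0` returning 0) drive which compatibility patches get applied. A minimal bash sketch of that comparison logic follows; the `lt` name and the `IFS=.-:`/`read -ra` splitting mirror the scripts/common.sh trace in the log, but this is a re-implementation for illustration, not the original script.

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above: split each version on
# '.', '-' or ':' and compare numerically field by field. Returns 0
# (success) when $1 is strictly less than $2, matching "lt" semantics.
lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # ver1 newer: not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # ver1 older: less-than
    done
    return 1  # equal: not strictly less-than
}

lt 22.11.4 21.11.0 && echo lt || echo "not lt"
lt 22.11.4 24.07.0 && echo lt || echo "not lt"
```

This is why only the rte_config.h and rte_pcapng.c patches run here: 22.11.4 is not below 21.11.0, but it is below 24.07.0. (Caveat: bash arithmetic treats a field like `08` as invalid octal, so a hardened script would strip leading zeros first.)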
00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:00:58.304 10:21:46 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:02.496 The Meson build system 00:01:02.496 Version: 1.3.1 00:01:02.496 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:02.496 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:02.496 Build type: native build 00:01:02.496 Program cat found: YES (/usr/bin/cat) 00:01:02.496 Project name: DPDK 00:01:02.496 Project version: 22.11.4 00:01:02.496 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:02.496 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:02.496 Host machine cpu family: x86_64 00:01:02.496 Host machine cpu: x86_64 00:01:02.496 Message: ## Building in Developer Mode ## 00:01:02.496 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:02.496 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:02.496 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:02.496 Program objdump found: YES (/usr/bin/objdump) 00:01:02.496 Program python3 found: YES (/usr/bin/python3) 00:01:02.496 
Program cat found: YES (/usr/bin/cat) 00:01:02.496 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:02.496 Checking for size of "void *" : 8 00:01:02.496 Checking for size of "void *" : 8 (cached) 00:01:02.496 Library m found: YES 00:01:02.496 Library numa found: YES 00:01:02.496 Has header "numaif.h" : YES 00:01:02.496 Library fdt found: NO 00:01:02.496 Library execinfo found: NO 00:01:02.496 Has header "execinfo.h" : YES 00:01:02.496 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:02.496 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:02.496 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:02.496 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:02.496 Run-time dependency openssl found: YES 3.0.9 00:01:02.496 Run-time dependency libpcap found: YES 1.10.4 00:01:02.496 Has header "pcap.h" with dependency libpcap: YES 00:01:02.496 Compiler for C supports arguments -Wcast-qual: YES 00:01:02.496 Compiler for C supports arguments -Wdeprecated: YES 00:01:02.496 Compiler for C supports arguments -Wformat: YES 00:01:02.496 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:02.496 Compiler for C supports arguments -Wformat-security: NO 00:01:02.496 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:02.496 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:02.496 Compiler for C supports arguments -Wnested-externs: YES 00:01:02.496 Compiler for C supports arguments -Wold-style-definition: YES 00:01:02.496 Compiler for C supports arguments -Wpointer-arith: YES 00:01:02.496 Compiler for C supports arguments -Wsign-compare: YES 00:01:02.496 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:02.496 Compiler for C supports arguments -Wundef: YES 00:01:02.496 Compiler for C supports arguments -Wwrite-strings: YES 00:01:02.496 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:02.496 
Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:02.496 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:02.496 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:02.496 Compiler for C supports arguments -mavx512f: YES 00:01:02.496 Checking if "AVX512 checking" compiles: YES 00:01:02.496 Fetching value of define "__SSE4_2__" : 1 00:01:02.496 Fetching value of define "__AES__" : 1 00:01:02.496 Fetching value of define "__AVX__" : 1 00:01:02.496 Fetching value of define "__AVX2__" : (undefined) 00:01:02.496 Fetching value of define "__AVX512BW__" : (undefined) 00:01:02.496 Fetching value of define "__AVX512CD__" : (undefined) 00:01:02.496 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:02.496 Fetching value of define "__AVX512F__" : (undefined) 00:01:02.496 Fetching value of define "__AVX512VL__" : (undefined) 00:01:02.496 Fetching value of define "__PCLMUL__" : 1 00:01:02.496 Fetching value of define "__RDRND__" : (undefined) 00:01:02.496 Fetching value of define "__RDSEED__" : (undefined) 00:01:02.496 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:02.496 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:02.496 Message: lib/kvargs: Defining dependency "kvargs" 00:01:02.496 Message: lib/telemetry: Defining dependency "telemetry" 00:01:02.496 Checking for function "getentropy" : YES 00:01:02.496 Message: lib/eal: Defining dependency "eal" 00:01:02.496 Message: lib/ring: Defining dependency "ring" 00:01:02.496 Message: lib/rcu: Defining dependency "rcu" 00:01:02.496 Message: lib/mempool: Defining dependency "mempool" 00:01:02.496 Message: lib/mbuf: Defining dependency "mbuf" 00:01:02.496 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:02.496 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:02.496 Compiler for C supports arguments -mpclmul: YES 00:01:02.496 Compiler for C supports arguments -maes: YES 00:01:02.496 Compiler for C 
supports arguments -mavx512f: YES (cached) 00:01:02.496 Compiler for C supports arguments -mavx512bw: YES 00:01:02.496 Compiler for C supports arguments -mavx512dq: YES 00:01:02.496 Compiler for C supports arguments -mavx512vl: YES 00:01:02.496 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:02.496 Compiler for C supports arguments -mavx2: YES 00:01:02.496 Compiler for C supports arguments -mavx: YES 00:01:02.496 Message: lib/net: Defining dependency "net" 00:01:02.496 Message: lib/meter: Defining dependency "meter" 00:01:02.496 Message: lib/ethdev: Defining dependency "ethdev" 00:01:02.496 Message: lib/pci: Defining dependency "pci" 00:01:02.496 Message: lib/cmdline: Defining dependency "cmdline" 00:01:02.496 Message: lib/metrics: Defining dependency "metrics" 00:01:02.496 Message: lib/hash: Defining dependency "hash" 00:01:02.496 Message: lib/timer: Defining dependency "timer" 00:01:02.496 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:02.496 Compiler for C supports arguments -mavx2: YES (cached) 00:01:02.496 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:02.496 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:02.496 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:02.496 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:02.496 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:02.496 Message: lib/acl: Defining dependency "acl" 00:01:02.496 Message: lib/bbdev: Defining dependency "bbdev" 00:01:02.496 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:02.496 Run-time dependency libelf found: YES 0.190 00:01:02.496 Message: lib/bpf: Defining dependency "bpf" 00:01:02.496 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:02.496 Message: lib/compressdev: Defining dependency "compressdev" 00:01:02.496 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:02.496 Message: lib/distributor: 
Defining dependency "distributor" 00:01:02.496 Message: lib/efd: Defining dependency "efd" 00:01:02.496 Message: lib/eventdev: Defining dependency "eventdev" 00:01:02.497 Message: lib/gpudev: Defining dependency "gpudev" 00:01:02.497 Message: lib/gro: Defining dependency "gro" 00:01:02.497 Message: lib/gso: Defining dependency "gso" 00:01:02.497 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:02.497 Message: lib/jobstats: Defining dependency "jobstats" 00:01:02.497 Message: lib/latencystats: Defining dependency "latencystats" 00:01:02.497 Message: lib/lpm: Defining dependency "lpm" 00:01:02.497 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:02.497 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:02.497 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:02.497 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:02.497 Message: lib/member: Defining dependency "member" 00:01:02.497 Message: lib/pcapng: Defining dependency "pcapng" 00:01:02.497 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:02.497 Message: lib/power: Defining dependency "power" 00:01:02.497 Message: lib/rawdev: Defining dependency "rawdev" 00:01:02.497 Message: lib/regexdev: Defining dependency "regexdev" 00:01:02.497 Message: lib/dmadev: Defining dependency "dmadev" 00:01:02.497 Message: lib/rib: Defining dependency "rib" 00:01:02.497 Message: lib/reorder: Defining dependency "reorder" 00:01:02.497 Message: lib/sched: Defining dependency "sched" 00:01:02.497 Message: lib/security: Defining dependency "security" 00:01:02.497 Message: lib/stack: Defining dependency "stack" 00:01:02.497 Has header "linux/userfaultfd.h" : YES 00:01:02.497 Message: lib/vhost: Defining dependency "vhost" 00:01:02.497 Message: lib/ipsec: Defining dependency "ipsec" 00:01:02.497 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:02.497 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 
00:01:02.497 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:02.497 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:02.497 Message: lib/fib: Defining dependency "fib" 00:01:02.497 Message: lib/port: Defining dependency "port" 00:01:02.497 Message: lib/pdump: Defining dependency "pdump" 00:01:02.497 Message: lib/table: Defining dependency "table" 00:01:02.497 Message: lib/pipeline: Defining dependency "pipeline" 00:01:02.497 Message: lib/graph: Defining dependency "graph" 00:01:02.497 Message: lib/node: Defining dependency "node" 00:01:02.497 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:02.497 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:02.497 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:02.497 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:02.497 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:02.497 Compiler for C supports arguments -Wno-unused-value: YES 00:01:04.404 Compiler for C supports arguments -Wno-format: YES 00:01:04.404 Compiler for C supports arguments -Wno-format-security: YES 00:01:04.404 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:04.404 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:04.404 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:04.404 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:04.404 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:04.404 Compiler for C supports arguments -mavx2: YES (cached) 00:01:04.404 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:04.404 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:04.404 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:04.404 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:04.404 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:04.404 Program doxygen found: YES 
(/usr/bin/doxygen) 00:01:04.404 Configuring doxy-api.conf using configuration 00:01:04.404 Program sphinx-build found: NO 00:01:04.404 Configuring rte_build_config.h using configuration 00:01:04.404 Message: 00:01:04.404 ================= 00:01:04.404 Applications Enabled 00:01:04.404 ================= 00:01:04.404 00:01:04.404 apps: 00:01:04.404 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:04.404 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:04.404 test-security-perf, 00:01:04.404 00:01:04.404 Message: 00:01:04.404 ================= 00:01:04.404 Libraries Enabled 00:01:04.404 ================= 00:01:04.404 00:01:04.404 libs: 00:01:04.404 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:04.404 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:04.404 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:04.404 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:04.404 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:04.405 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:04.405 table, pipeline, graph, node, 00:01:04.405 00:01:04.405 Message: 00:01:04.405 =============== 00:01:04.405 Drivers Enabled 00:01:04.405 =============== 00:01:04.405 00:01:04.405 common: 00:01:04.405 00:01:04.405 bus: 00:01:04.405 pci, vdev, 00:01:04.405 mempool: 00:01:04.405 ring, 00:01:04.405 dma: 00:01:04.405 00:01:04.405 net: 00:01:04.405 i40e, 00:01:04.405 raw: 00:01:04.405 00:01:04.405 crypto: 00:01:04.405 00:01:04.405 compress: 00:01:04.405 00:01:04.405 regex: 00:01:04.405 00:01:04.405 vdpa: 00:01:04.405 00:01:04.405 event: 00:01:04.405 00:01:04.405 baseband: 00:01:04.405 00:01:04.405 gpu: 00:01:04.405 00:01:04.405 00:01:04.405 Message: 00:01:04.405 ================= 00:01:04.405 Content Skipped 00:01:04.405 ================= 00:01:04.405 00:01:04.405 apps: 
00:01:04.405 00:01:04.405 libs: 00:01:04.405 kni: explicitly disabled via build config (deprecated lib) 00:01:04.405 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:04.405 00:01:04.405 drivers: 00:01:04.405 common/cpt: not in enabled drivers build config 00:01:04.405 common/dpaax: not in enabled drivers build config 00:01:04.405 common/iavf: not in enabled drivers build config 00:01:04.405 common/idpf: not in enabled drivers build config 00:01:04.405 common/mvep: not in enabled drivers build config 00:01:04.405 common/octeontx: not in enabled drivers build config 00:01:04.405 bus/auxiliary: not in enabled drivers build config 00:01:04.405 bus/dpaa: not in enabled drivers build config 00:01:04.405 bus/fslmc: not in enabled drivers build config 00:01:04.405 bus/ifpga: not in enabled drivers build config 00:01:04.405 bus/vmbus: not in enabled drivers build config 00:01:04.405 common/cnxk: not in enabled drivers build config 00:01:04.405 common/mlx5: not in enabled drivers build config 00:01:04.405 common/qat: not in enabled drivers build config 00:01:04.405 common/sfc_efx: not in enabled drivers build config 00:01:04.405 mempool/bucket: not in enabled drivers build config 00:01:04.405 mempool/cnxk: not in enabled drivers build config 00:01:04.405 mempool/dpaa: not in enabled drivers build config 00:01:04.405 mempool/dpaa2: not in enabled drivers build config 00:01:04.405 mempool/octeontx: not in enabled drivers build config 00:01:04.405 mempool/stack: not in enabled drivers build config 00:01:04.405 dma/cnxk: not in enabled drivers build config 00:01:04.405 dma/dpaa: not in enabled drivers build config 00:01:04.405 dma/dpaa2: not in enabled drivers build config 00:01:04.405 dma/hisilicon: not in enabled drivers build config 00:01:04.405 dma/idxd: not in enabled drivers build config 00:01:04.405 dma/ioat: not in enabled drivers build config 00:01:04.405 dma/skeleton: not in enabled drivers build config 00:01:04.405 net/af_packet: not in 
enabled drivers build config 00:01:04.405 net/af_xdp: not in enabled drivers build config 00:01:04.405 net/ark: not in enabled drivers build config 00:01:04.405 net/atlantic: not in enabled drivers build config 00:01:04.405 net/avp: not in enabled drivers build config 00:01:04.405 net/axgbe: not in enabled drivers build config 00:01:04.405 net/bnx2x: not in enabled drivers build config 00:01:04.405 net/bnxt: not in enabled drivers build config 00:01:04.405 net/bonding: not in enabled drivers build config 00:01:04.405 net/cnxk: not in enabled drivers build config 00:01:04.405 net/cxgbe: not in enabled drivers build config 00:01:04.405 net/dpaa: not in enabled drivers build config 00:01:04.405 net/dpaa2: not in enabled drivers build config 00:01:04.405 net/e1000: not in enabled drivers build config 00:01:04.405 net/ena: not in enabled drivers build config 00:01:04.405 net/enetc: not in enabled drivers build config 00:01:04.405 net/enetfec: not in enabled drivers build config 00:01:04.405 net/enic: not in enabled drivers build config 00:01:04.405 net/failsafe: not in enabled drivers build config 00:01:04.405 net/fm10k: not in enabled drivers build config 00:01:04.405 net/gve: not in enabled drivers build config 00:01:04.405 net/hinic: not in enabled drivers build config 00:01:04.405 net/hns3: not in enabled drivers build config 00:01:04.405 net/iavf: not in enabled drivers build config 00:01:04.405 net/ice: not in enabled drivers build config 00:01:04.405 net/idpf: not in enabled drivers build config 00:01:04.405 net/igc: not in enabled drivers build config 00:01:04.405 net/ionic: not in enabled drivers build config 00:01:04.405 net/ipn3ke: not in enabled drivers build config 00:01:04.405 net/ixgbe: not in enabled drivers build config 00:01:04.405 net/kni: not in enabled drivers build config 00:01:04.405 net/liquidio: not in enabled drivers build config 00:01:04.405 net/mana: not in enabled drivers build config 00:01:04.405 net/memif: not in enabled drivers build 
config 00:01:04.405 net/mlx4: not in enabled drivers build config 00:01:04.405 net/mlx5: not in enabled drivers build config 00:01:04.405 net/mvneta: not in enabled drivers build config 00:01:04.405 net/mvpp2: not in enabled drivers build config 00:01:04.405 net/netvsc: not in enabled drivers build config 00:01:04.405 net/nfb: not in enabled drivers build config 00:01:04.405 net/nfp: not in enabled drivers build config 00:01:04.405 net/ngbe: not in enabled drivers build config 00:01:04.405 net/null: not in enabled drivers build config 00:01:04.405 net/octeontx: not in enabled drivers build config 00:01:04.405 net/octeon_ep: not in enabled drivers build config 00:01:04.405 net/pcap: not in enabled drivers build config 00:01:04.405 net/pfe: not in enabled drivers build config 00:01:04.405 net/qede: not in enabled drivers build config 00:01:04.405 net/ring: not in enabled drivers build config 00:01:04.405 net/sfc: not in enabled drivers build config 00:01:04.405 net/softnic: not in enabled drivers build config 00:01:04.405 net/tap: not in enabled drivers build config 00:01:04.405 net/thunderx: not in enabled drivers build config 00:01:04.405 net/txgbe: not in enabled drivers build config 00:01:04.405 net/vdev_netvsc: not in enabled drivers build config 00:01:04.405 net/vhost: not in enabled drivers build config 00:01:04.405 net/virtio: not in enabled drivers build config 00:01:04.405 net/vmxnet3: not in enabled drivers build config 00:01:04.405 raw/cnxk_bphy: not in enabled drivers build config 00:01:04.405 raw/cnxk_gpio: not in enabled drivers build config 00:01:04.405 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:04.405 raw/ifpga: not in enabled drivers build config 00:01:04.405 raw/ntb: not in enabled drivers build config 00:01:04.405 raw/skeleton: not in enabled drivers build config 00:01:04.405 crypto/armv8: not in enabled drivers build config 00:01:04.405 crypto/bcmfs: not in enabled drivers build config 00:01:04.405 crypto/caam_jr: not in enabled 
drivers build config 00:01:04.405 crypto/ccp: not in enabled drivers build config 00:01:04.405 crypto/cnxk: not in enabled drivers build config 00:01:04.405 crypto/dpaa_sec: not in enabled drivers build config 00:01:04.405 crypto/dpaa2_sec: not in enabled drivers build config 00:01:04.405 crypto/ipsec_mb: not in enabled drivers build config 00:01:04.405 crypto/mlx5: not in enabled drivers build config 00:01:04.405 crypto/mvsam: not in enabled drivers build config 00:01:04.405 crypto/nitrox: not in enabled drivers build config 00:01:04.405 crypto/null: not in enabled drivers build config 00:01:04.405 crypto/octeontx: not in enabled drivers build config 00:01:04.405 crypto/openssl: not in enabled drivers build config 00:01:04.405 crypto/scheduler: not in enabled drivers build config 00:01:04.405 crypto/uadk: not in enabled drivers build config 00:01:04.405 crypto/virtio: not in enabled drivers build config 00:01:04.405 compress/isal: not in enabled drivers build config 00:01:04.405 compress/mlx5: not in enabled drivers build config 00:01:04.405 compress/octeontx: not in enabled drivers build config 00:01:04.405 compress/zlib: not in enabled drivers build config 00:01:04.405 regex/mlx5: not in enabled drivers build config 00:01:04.405 regex/cn9k: not in enabled drivers build config 00:01:04.405 vdpa/ifc: not in enabled drivers build config 00:01:04.405 vdpa/mlx5: not in enabled drivers build config 00:01:04.405 vdpa/sfc: not in enabled drivers build config 00:01:04.405 event/cnxk: not in enabled drivers build config 00:01:04.405 event/dlb2: not in enabled drivers build config 00:01:04.405 event/dpaa: not in enabled drivers build config 00:01:04.405 event/dpaa2: not in enabled drivers build config 00:01:04.405 event/dsw: not in enabled drivers build config 00:01:04.405 event/opdl: not in enabled drivers build config 00:01:04.405 event/skeleton: not in enabled drivers build config 00:01:04.405 event/sw: not in enabled drivers build config 00:01:04.405 event/octeontx: 
not in enabled drivers build config 00:01:04.405 baseband/acc: not in enabled drivers build config 00:01:04.405 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:04.405 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:04.405 baseband/la12xx: not in enabled drivers build config 00:01:04.405 baseband/null: not in enabled drivers build config 00:01:04.405 baseband/turbo_sw: not in enabled drivers build config 00:01:04.405 gpu/cuda: not in enabled drivers build config 00:01:04.405 00:01:04.405 00:01:04.405 Build targets in project: 316 00:01:04.405 00:01:04.405 DPDK 22.11.4 00:01:04.405 00:01:04.405 User defined options 00:01:04.405 libdir : lib 00:01:04.405 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:04.405 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:04.405 c_link_args : 00:01:04.405 enable_docs : false 00:01:04.406 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:04.406 enable_kmods : false 00:01:04.406 machine : native 00:01:04.406 tests : false 00:01:04.406 00:01:04.406 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:04.406 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
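Before the meson configure step shown above, the trace runs `printf %s, bus bus/pci ...` to turn the `DPDK_DRIVERS` array into the comma-separated `-Denable_drivers=` value. A small sketch of that idiom, using the same array contents as the log (assumed here, since only the trace line is visible): `printf` reuses its format string for every extra argument, which yields a trailing-comma CSV that meson's list parsing tolerates.

```shell
#!/usr/bin/env bash
# Join the enabled-driver list the same way the autobuild trace does:
# "%s," is applied once per array element, producing "a,b,c,".
DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
drivers=$(printf %s, "${DPDK_DRIVERS[@]}")
echo "$drivers"  # bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```

That value is then passed as `-Denable_drivers=$drivers` on the meson command line. Note the final warning in the configure output: invoking `meson build-tmp [options]` directly is deprecated, and newer meson expects the explicit `meson setup build-tmp [options]` form.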
00:01:04.406 10:21:52 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j32 00:01:04.406 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:04.406 [1/745] Generating lib/rte_kvargs_def with a custom command 00:01:04.406 [2/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:04.406 [3/745] Generating lib/rte_telemetry_def with a custom command 00:01:04.406 [4/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:04.406 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:04.406 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:04.406 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:04.406 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:04.406 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:04.406 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:04.406 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:04.406 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:04.406 [13/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:04.406 [14/745] Linking static target lib/librte_kvargs.a 00:01:04.406 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:04.406 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:04.406 [17/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:04.406 [18/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:04.406 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:04.406 [20/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 
00:01:04.406 [21/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:04.406 [22/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:04.406 [23/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:04.406 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:04.406 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:04.406 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:04.406 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:04.406 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:04.670 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:04.670 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:04.670 [31/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:04.670 [32/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:04.670 [33/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:04.670 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:04.670 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:04.670 [36/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:04.670 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:04.670 [38/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:04.670 [39/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:04.670 [40/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:04.670 [41/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:04.670 [42/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 
00:01:04.670 [43/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:04.670 [44/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:04.670 [45/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:04.670 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:04.670 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:04.670 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:04.670 [49/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:04.670 [50/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:04.670 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:04.670 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:04.670 [53/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:04.670 [54/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:04.670 [55/745] Generating lib/rte_eal_def with a custom command 00:01:04.670 [56/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:04.670 [57/745] Generating lib/rte_eal_mingw with a custom command 00:01:04.670 [58/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.670 [59/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:04.930 [60/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:04.930 [61/745] Generating lib/rte_ring_mingw with a custom command 00:01:04.930 [62/745] Generating lib/rte_rcu_def with a custom command 00:01:04.931 [63/745] Generating lib/rte_ring_def with a custom command 00:01:04.931 [64/745] Generating lib/rte_rcu_mingw with a custom command 00:01:04.931 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:04.931 [66/745] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:04.931 [67/745] Linking target lib/librte_kvargs.so.23.0 00:01:04.931 [68/745] Generating lib/rte_mempool_mingw with a custom command 00:01:04.931 [69/745] Generating lib/rte_mempool_def with a custom command 00:01:04.931 [70/745] Generating lib/rte_mbuf_def with a custom command 00:01:04.931 [71/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:04.931 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:04.931 [73/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:04.931 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:04.931 [75/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:04.931 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:04.931 [77/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:04.931 [78/745] Generating lib/rte_net_def with a custom command 00:01:04.931 [79/745] Generating lib/rte_net_mingw with a custom command 00:01:04.931 [80/745] Generating lib/rte_meter_mingw with a custom command 00:01:04.931 [81/745] Generating lib/rte_meter_def with a custom command 00:01:04.931 [82/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:04.931 [83/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:05.190 [84/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:05.190 [85/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:05.190 [86/745] Linking static target lib/librte_ring.a 00:01:05.190 [87/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:05.190 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:05.190 [89/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:05.190 [90/745] Linking static target lib/librte_meter.a 00:01:05.190 [91/745] 
Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:05.190 [92/745] Linking static target lib/librte_telemetry.a 00:01:05.190 [93/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:05.453 [94/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.453 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:05.725 [96/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:05.725 [97/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.725 [98/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:05.725 [99/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.725 [100/745] Linking target lib/librte_telemetry.so.23.0 00:01:05.985 [101/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:05.985 [102/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:05.985 [103/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:05.985 [104/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:05.985 [105/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:05.985 [106/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:05.985 [107/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:05.985 [108/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:05.985 [109/745] Generating lib/rte_ethdev_def with a custom command 00:01:05.985 [110/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:05.985 [111/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:05.985 [112/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:05.985 [113/745] Generating lib/rte_pci_def with a custom command 
00:01:05.985 [114/745] Generating lib/rte_pci_mingw with a custom command 00:01:06.245 [115/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:06.245 [116/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:06.245 [117/745] Linking static target lib/librte_pci.a 00:01:06.245 [118/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:06.245 [119/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:06.245 [120/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:06.245 [121/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:06.245 [122/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:06.245 [123/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:06.245 [124/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:06.245 [125/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:06.510 [126/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:06.510 [127/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:06.510 [128/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:06.510 [129/745] Generating lib/rte_cmdline_def with a custom command 00:01:06.510 [130/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:06.510 [131/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:06.510 [132/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:06.510 [133/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:06.510 [134/745] Generating lib/rte_metrics_def with a custom command 00:01:06.510 [135/745] Generating lib/rte_metrics_mingw with a custom command 00:01:06.510 [136/745] Generating lib/rte_hash_def with a custom command 00:01:06.510 [137/745] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:06.510 [138/745] Generating lib/rte_hash_mingw with a custom command 00:01:06.510 [139/745] Generating lib/rte_timer_def with a custom command 00:01:06.510 [140/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:06.510 [141/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:06.510 [142/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:06.510 [143/745] Linking static target lib/librte_rcu.a 00:01:06.510 [144/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.510 [145/745] Generating lib/rte_timer_mingw with a custom command 00:01:06.510 [146/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:06.510 [147/745] Linking static target lib/librte_net.a 00:01:06.770 [148/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:06.770 [149/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:06.770 [150/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:06.770 [151/745] Generating lib/rte_acl_def with a custom command 00:01:06.770 [152/745] Generating lib/rte_acl_mingw with a custom command 00:01:06.770 [153/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:06.770 [154/745] Generating lib/rte_bbdev_def with a custom command 00:01:06.770 [155/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:06.770 [156/745] Linking static target lib/librte_mempool.a 00:01:06.770 [157/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:06.770 [158/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:06.770 [159/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:06.770 [160/745] Generating lib/rte_bitratestats_def with a custom command 00:01:06.770 [161/745] Linking static target lib/librte_eal.a 00:01:06.770 
[162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:06.770 [163/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:07.028 [164/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.028 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:07.028 [166/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.028 [167/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:07.290 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:07.290 [169/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:07.290 [170/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:07.290 [171/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:07.290 [172/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:07.290 [173/745] Linking static target lib/librte_timer.a 00:01:07.290 [174/745] Linking static target lib/librte_cmdline.a 00:01:07.290 [175/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:07.290 [176/745] Generating lib/rte_bpf_def with a custom command 00:01:07.555 [177/745] Generating lib/rte_bpf_mingw with a custom command 00:01:07.555 [178/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:07.555 [179/745] Linking static target lib/librte_metrics.a 00:01:07.555 [180/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:07.818 [181/745] Generating lib/rte_cfgfile_def with a custom command 00:01:07.818 [182/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:07.818 [183/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:07.818 [184/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:07.818 [185/745] Generating lib/timer.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:07.818 [186/745] Generating lib/rte_compressdev_def with a custom command 00:01:07.818 [187/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.080 [188/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:08.080 [189/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:08.080 [190/745] Linking static target lib/librte_cfgfile.a 00:01:08.080 [191/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:08.080 [192/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:08.080 [193/745] Linking static target lib/librte_bitratestats.a 00:01:08.080 [194/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:08.080 [195/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:08.080 [196/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.080 [197/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:08.080 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:08.080 [199/745] Generating lib/rte_cryptodev_def with a custom command 00:01:08.080 [200/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:08.340 [201/745] Generating lib/rte_distributor_def with a custom command 00:01:08.340 [202/745] Generating lib/rte_distributor_mingw with a custom command 00:01:08.340 [203/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:08.340 [204/745] Generating lib/rte_efd_mingw with a custom command 00:01:08.340 [205/745] Generating lib/rte_efd_def with a custom command 00:01:08.340 [206/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.605 [207/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.605 [208/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:08.605 
[209/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:08.605 [210/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:08.605 [211/745] Linking static target lib/librte_bbdev.a 00:01:08.870 [212/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.870 [213/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:08.870 [214/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:09.132 [215/745] Generating lib/rte_eventdev_def with a custom command 00:01:09.132 [216/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:09.132 [217/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:09.132 [218/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:09.132 [219/745] Generating lib/rte_gpudev_def with a custom command 00:01:09.132 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:09.398 [221/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:09.398 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:09.398 [223/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:09.398 [224/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:09.398 [225/745] Generating lib/rte_gro_def with a custom command 00:01:09.398 [226/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:09.398 [227/745] Linking static target lib/librte_compressdev.a 00:01:09.398 [228/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.398 [229/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:09.662 [230/745] Generating lib/rte_gro_mingw with a custom command 00:01:09.662 [231/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:09.662 
[232/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:09.662 [233/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:09.662 [234/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:09.924 [235/745] Generating lib/rte_gso_def with a custom command 00:01:09.924 [236/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:09.924 [237/745] Linking static target lib/librte_bpf.a 00:01:09.924 [238/745] Generating lib/rte_gso_mingw with a custom command 00:01:10.185 [239/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:10.185 [240/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:10.185 [241/745] Linking static target lib/librte_distributor.a 00:01:10.447 [242/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.447 [243/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:10.710 [244/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.710 [245/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:10.710 [246/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:10.710 [247/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:10.710 [248/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:10.710 [249/745] Generating lib/rte_ip_frag_def with a custom command 00:01:10.710 [250/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:10.710 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:10.710 [252/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:10.710 [253/745] Linking static target lib/librte_gpudev.a 00:01:10.972 [254/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:10.972 [255/745] Generating lib/rte_jobstats_mingw with a custom command 
00:01:10.972 [256/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:10.972 [257/745] Generating lib/rte_jobstats_def with a custom command 00:01:10.972 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:10.972 [259/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.972 [260/745] Generating lib/rte_latencystats_def with a custom command 00:01:10.972 [261/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:10.972 [262/745] Generating lib/rte_lpm_def with a custom command 00:01:10.972 [263/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:10.972 [264/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:10.972 [265/745] Generating lib/rte_lpm_mingw with a custom command 00:01:10.972 [266/745] Linking static target lib/librte_gro.a 00:01:10.972 [267/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:10.972 [268/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:10.972 [269/745] Linking static target lib/librte_jobstats.a 00:01:10.972 [270/745] Generating lib/rte_member_def with a custom command 00:01:11.243 [271/745] Generating lib/rte_member_mingw with a custom command 00:01:11.243 [272/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.502 [273/745] Generating lib/rte_pcapng_def with a custom command 00:01:11.502 [274/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:11.503 [275/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:11.503 [276/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.503 [277/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:11.503 [278/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:11.503 [279/745] Compiling C 
object lib/librte_power.a.p/power_guest_channel.c.o 00:01:11.765 [280/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:11.765 [281/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:11.765 [282/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:11.765 [283/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:11.765 [284/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:11.765 [285/745] Linking static target lib/acl/libavx2_tmp.a 00:01:12.029 [286/745] Generating lib/rte_power_def with a custom command 00:01:12.029 [287/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:12.029 [288/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:12.029 [289/745] Generating lib/rte_power_mingw with a custom command 00:01:12.029 [290/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:12.029 [291/745] Generating lib/rte_rawdev_def with a custom command 00:01:12.029 [292/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:12.029 [293/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:12.029 [294/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:12.029 [295/745] Generating lib/rte_regexdev_def with a custom command 00:01:12.029 [296/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:12.029 [297/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.292 [298/745] Generating lib/rte_dmadev_def with a custom command 00:01:12.292 [299/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:12.292 [300/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:12.292 [301/745] Generating lib/rte_rib_mingw with a custom command 00:01:12.292 [302/745] Generating lib/rte_rib_def with a custom command 
00:01:12.292 [303/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:12.292 [304/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:12.292 [305/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:12.292 [306/745] Generating lib/rte_reorder_def with a custom command 00:01:12.292 [307/745] Generating lib/rte_reorder_mingw with a custom command 00:01:12.292 [308/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:12.292 [309/745] Linking static target lib/librte_mbuf.a 00:01:12.292 [310/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:12.292 [311/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:12.292 [312/745] Linking static target lib/librte_hash.a 00:01:12.292 [313/745] Linking static target lib/librte_ethdev.a 00:01:12.292 [314/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:12.292 [315/745] Linking static target lib/librte_latencystats.a 00:01:12.292 [316/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:12.292 [317/745] Linking static target lib/acl/libavx512_tmp.a 00:01:12.292 [318/745] Linking static target lib/librte_acl.a 00:01:12.558 [319/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:12.558 [320/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:12.558 [321/745] Linking static target lib/librte_efd.a 00:01:12.558 [322/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:12.558 [323/745] Generating lib/rte_sched_def with a custom command 00:01:12.558 [324/745] Linking static target lib/librte_ip_frag.a 00:01:12.558 [325/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:12.558 [326/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:12.558 [327/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:12.558 [328/745] Compiling 
C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:12.558 [329/745] Linking static target lib/librte_gso.a 00:01:12.558 [330/745] Generating lib/rte_sched_mingw with a custom command 00:01:12.558 [331/745] Generating lib/rte_security_def with a custom command 00:01:12.558 [332/745] Generating lib/rte_security_mingw with a custom command 00:01:12.558 [333/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:12.558 [334/745] Linking static target lib/librte_rawdev.a 00:01:12.840 [335/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.840 [336/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:12.840 [337/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:12.840 [338/745] Generating lib/rte_stack_def with a custom command 00:01:12.840 [339/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:12.840 [340/745] Linking static target lib/librte_stack.a 00:01:12.840 [341/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.840 [342/745] Generating lib/rte_stack_mingw with a custom command 00:01:12.840 [343/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.840 [344/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:12.840 [345/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.840 [346/745] Linking static target lib/librte_dmadev.a 00:01:12.840 [347/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.102 [348/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:13.102 [349/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.102 [350/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:13.102 [351/745] Generating 
lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.102 [352/745] Generating lib/rte_vhost_def with a custom command 00:01:13.102 [353/745] Generating lib/rte_vhost_mingw with a custom command 00:01:13.367 [354/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:13.367 [355/745] Linking static target lib/librte_pcapng.a 00:01:13.367 [356/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.367 [357/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.367 [358/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:13.367 [359/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:13.367 [360/745] Generating lib/rte_ipsec_def with a custom command 00:01:13.628 [361/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:13.628 [362/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:13.628 [363/745] Linking static target lib/librte_regexdev.a 00:01:13.628 [364/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:13.628 [365/745] Linking static target lib/librte_lpm.a 00:01:13.628 [366/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.892 [367/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.892 [368/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:13.892 [369/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:13.892 [370/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:13.892 [371/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:13.892 [372/745] Linking static target lib/librte_reorder.a 00:01:14.155 [373/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:14.155 [374/745] 
Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:14.155 [375/745] Generating lib/rte_fib_def with a custom command 00:01:14.155 [376/745] Generating lib/rte_fib_mingw with a custom command 00:01:14.155 [377/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.155 [378/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:14.155 [379/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:14.155 [380/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:14.419 [381/745] Linking static target lib/librte_power.a 00:01:14.419 [382/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.419 [383/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:14.419 [384/745] Linking static target lib/librte_security.a 00:01:14.419 [385/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:14.681 [386/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.681 [387/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:14.681 [388/745] Linking static target lib/librte_eventdev.a 00:01:14.681 [389/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:14.681 [390/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:14.681 [391/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:14.681 [392/745] Linking static target lib/librte_rib.a 00:01:14.681 [393/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:14.949 [394/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:14.949 [395/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:14.949 [396/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:14.949 [397/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:14.949 [398/745] 
Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:14.949 [399/745] Generating lib/rte_port_def with a custom command 00:01:14.949 [400/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:14.949 [401/745] Linking static target lib/librte_cryptodev.a 00:01:14.949 [402/745] Generating lib/rte_port_mingw with a custom command 00:01:14.949 [403/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:15.217 [404/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.217 [405/745] Generating lib/rte_pdump_def with a custom command 00:01:15.217 [406/745] Generating lib/rte_pdump_mingw with a custom command 00:01:15.479 [407/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.479 [408/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:15.479 [409/745] Linking static target lib/librte_member.a 00:01:15.479 [410/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:15.742 [411/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:15.742 [412/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.742 [413/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:16.007 [414/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:16.007 [415/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:16.007 [416/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.265 [417/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:16.265 [418/745] Linking static target lib/librte_sched.a 00:01:16.265 [419/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:16.265 [420/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:16.265 [421/745] Compiling C object 
lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:16.265 [422/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:16.265 [423/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:16.265 [424/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:16.265 [425/745] Linking static target lib/librte_fib.a 00:01:16.841 [426/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:16.841 [427/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:16.841 [428/745] Generating lib/rte_table_def with a custom command 00:01:16.841 [429/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:16.841 [430/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:16.841 [431/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.841 [432/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:16.841 [433/745] Generating lib/rte_table_mingw with a custom command 00:01:16.841 [434/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.841 [435/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:16.841 [436/745] Generating lib/rte_pipeline_def with a custom command 00:01:17.100 [437/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:17.100 [438/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:17.100 [439/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:17.100 [440/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:17.100 [441/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:17.367 [442/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:17.367 [443/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.367 [444/745] 
Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:17.628 [445/745] Generating lib/rte_graph_def with a custom command 00:01:17.628 [446/745] Generating lib/rte_graph_mingw with a custom command 00:01:17.628 [447/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:17.628 [448/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:17.628 [449/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:17.628 [450/745] Linking static target lib/librte_pdump.a 00:01:17.889 [451/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:17.889 [452/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:17.889 [453/745] Linking static target lib/librte_ipsec.a 00:01:18.152 [454/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:18.152 [455/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.152 [456/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:18.152 [457/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:18.152 [458/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:18.152 [459/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:18.152 [460/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.152 [461/745] Generating lib/rte_node_mingw with a custom command 00:01:18.152 [462/745] Generating lib/rte_node_def with a custom command 00:01:18.415 [463/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:18.416 [464/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:18.416 [465/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.416 [466/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:18.416 [467/745] Compiling C 
object lib/librte_graph.a.p/graph_graph.c.o 00:01:18.416 [468/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.416 [469/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:18.416 [470/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:18.682 [471/745] Linking target lib/librte_eal.so.23.0 00:01:18.682 [472/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:18.682 [473/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:18.682 [474/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:18.682 [475/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:18.682 [476/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:18.682 [477/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:18.682 [478/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:18.682 [479/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:18.682 [480/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:18.682 [481/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:18.682 [482/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:18.682 [483/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:18.944 [484/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:18.944 [485/745] Linking target lib/librte_ring.so.23.0 00:01:18.944 [486/745] Linking target lib/librte_meter.so.23.0 00:01:18.944 [487/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:18.944 [488/745] Linking target lib/librte_pci.so.23.0 00:01:18.944 [489/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:18.944 [490/745] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:18.944 [491/745] Linking target lib/librte_timer.so.23.0 00:01:18.944 [492/745] Linking target lib/librte_acl.so.23.0 00:01:18.944 [493/745] Linking target lib/librte_cfgfile.so.23.0 00:01:18.944 [494/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:18.944 [495/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:19.207 [496/745] Linking target lib/librte_jobstats.so.23.0 00:01:19.207 [497/745] Linking target lib/librte_rawdev.so.23.0 00:01:19.207 [498/745] Linking target lib/librte_rcu.so.23.0 00:01:19.207 [499/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:19.207 [500/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:19.207 [501/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:19.207 [502/745] Linking static target lib/librte_table.a 00:01:19.207 [503/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:19.207 [504/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:19.207 [505/745] Linking target lib/librte_mempool.so.23.0 00:01:19.207 [506/745] Linking target lib/librte_dmadev.so.23.0 00:01:19.207 [507/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:19.207 [508/745] Linking target lib/librte_stack.so.23.0 00:01:19.207 [509/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:19.207 [510/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:19.207 [511/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:19.207 [512/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:19.467 [513/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:19.467 [514/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 
00:01:19.467 [515/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:19.467 [516/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:19.467 [517/745] Linking target lib/librte_rib.so.23.0 00:01:19.467 [518/745] Linking static target lib/librte_port.a 00:01:19.467 [519/745] Linking target lib/librte_mbuf.so.23.0 00:01:19.467 [520/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:19.467 [521/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:19.467 [522/745] Linking static target lib/librte_graph.a 00:01:19.468 [523/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.468 [524/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:19.468 [525/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:19.468 [526/745] Linking static target drivers/librte_bus_vdev.a 00:01:19.729 [527/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:19.729 [528/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:19.729 [529/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:19.729 [530/745] Linking target lib/librte_fib.so.23.0 00:01:19.729 [531/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:19.729 [532/745] Linking target lib/librte_net.so.23.0 00:01:19.729 [533/745] Linking target lib/librte_bbdev.so.23.0 00:01:19.729 [534/745] Linking target lib/librte_compressdev.so.23.0 00:01:19.729 [535/745] Linking target lib/librte_cryptodev.so.23.0 00:01:19.729 [536/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:19.991 [537/745] Linking target lib/librte_distributor.so.23.0 00:01:19.991 [538/745] Linking target lib/librte_gpudev.so.23.0 00:01:19.991 [539/745] Generating drivers/rte_bus_vdev.sym_chk with 
a custom command (wrapped by meson to capture output) 00:01:19.991 [540/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:19.991 [541/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:19.991 [542/745] Linking target lib/librte_regexdev.so.23.0 00:01:19.991 [543/745] Linking target lib/librte_reorder.so.23.0 00:01:19.991 [544/745] Linking target lib/librte_sched.so.23.0 00:01:19.991 [545/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:19.991 [546/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:19.991 [547/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:19.991 [548/745] Linking target lib/librte_ethdev.so.23.0 00:01:20.255 [549/745] Linking target lib/librte_cmdline.so.23.0 00:01:20.255 [550/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:20.255 [551/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:20.255 [552/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.255 [553/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:20.255 [554/745] Linking target lib/librte_hash.so.23.0 00:01:20.255 [555/745] Linking target lib/librte_security.so.23.0 00:01:20.255 [556/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.255 [557/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:20.255 [558/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:20.255 [559/745] Linking static target drivers/librte_bus_pci.a 00:01:20.255 [560/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:20.255 [561/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:20.520 [562/745] Linking target lib/librte_metrics.so.23.0 
00:01:20.521 [563/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:20.521 [564/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:20.521 [565/745] Linking target lib/librte_bpf.so.23.0 00:01:20.521 [566/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:20.521 [567/745] Linking target lib/librte_efd.so.23.0 00:01:20.521 [568/745] Linking target lib/librte_gro.so.23.0 00:01:20.521 [569/745] Linking target lib/librte_eventdev.so.23.0 00:01:20.521 [570/745] Linking target lib/librte_gso.so.23.0 00:01:20.521 [571/745] Linking target lib/librte_ip_frag.so.23.0 00:01:20.521 [572/745] Linking target lib/librte_lpm.so.23.0 00:01:20.783 [573/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:20.783 [574/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:20.783 [575/745] Linking target lib/librte_bitratestats.so.23.0 00:01:20.783 [576/745] Linking target lib/librte_latencystats.so.23.0 00:01:20.783 [577/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:20.783 [578/745] Linking target lib/librte_pcapng.so.23.0 00:01:20.783 [579/745] Linking target lib/librte_member.so.23.0 00:01:20.783 [580/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:20.783 [581/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.783 [582/745] Linking target lib/librte_power.so.23.0 00:01:20.783 [583/745] Linking target lib/librte_ipsec.so.23.0 00:01:20.783 [584/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:20.783 [585/745] Linking target lib/librte_graph.so.23.0 00:01:20.783 [586/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.783 [587/745] Compiling C object 
drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:21.046 [588/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:21.046 [589/745] Linking target lib/librte_port.so.23.0 00:01:21.046 [590/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:21.046 [591/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:21.046 [592/745] Linking target lib/librte_pdump.so.23.0 00:01:21.046 [593/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:21.309 [594/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:21.309 [595/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:21.309 [596/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:21.309 [597/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:21.309 [598/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:21.309 [599/745] Linking target lib/librte_table.so.23.0 00:01:21.573 [600/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:21.573 [601/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:21.573 [602/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:21.573 [603/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:21.573 [604/745] Linking static target drivers/librte_mempool_ring.a 00:01:21.573 [605/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:21.573 [606/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:21.573 [607/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:21.573 [608/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:21.573 
[609/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:21.839 [610/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:21.839 [611/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:21.839 [612/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:21.839 [613/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:22.783 [614/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:22.783 [615/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:22.783 [616/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:22.783 [617/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:23.044 [618/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:23.044 [619/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:23.340 [620/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:23.340 [621/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:23.340 [622/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:23.340 [623/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:23.340 [624/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:23.340 [625/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:23.340 [626/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:23.940 [627/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:23.940 [628/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:23.940 [629/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:23.940 [630/745] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:23.940 [631/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:24.202 [632/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:24.202 [633/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:24.202 [634/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:24.202 [635/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:24.786 [636/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:24.786 [637/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:24.786 [638/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:24.786 [639/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:24.786 [640/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:25.046 [641/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:25.046 [642/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:25.307 [643/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:25.569 [644/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:25.832 [645/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:26.101 [646/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:26.101 [647/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:26.101 [648/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:26.362 [649/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:26.362 [650/745] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:26.624 [651/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:27.198 [652/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:27.198 [653/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:27.199 [654/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:27.199 [655/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:27.199 [656/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:27.199 [657/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:27.199 [658/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:27.199 [659/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:27.199 [660/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:27.199 [661/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:27.461 [662/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:27.461 [663/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:27.461 [664/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:27.461 [665/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:27.721 [666/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:27.721 [667/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:27.721 [668/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:27.984 [669/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:27.984 [670/745] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:28.246 [671/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:28.507 [672/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:28.507 [673/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:28.507 [674/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:28.771 [675/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:29.033 [676/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:29.033 [677/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:29.034 [678/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:29.034 [679/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:29.294 [680/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:29.294 [681/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:29.294 [682/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:29.295 [683/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:29.295 [684/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:29.553 [685/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:29.553 [686/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:29.553 [687/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:29.553 [688/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:29.553 [689/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:29.553 [690/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:29.553 [691/745] Compiling C object 
drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:29.553 [692/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:29.553 [693/745] Linking static target drivers/librte_net_i40e.a 00:01:29.812 [694/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:29.812 [695/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:29.812 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:29.812 [697/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:30.071 [698/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:30.071 [699/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:30.346 [700/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.346 [701/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:30.346 [702/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:30.346 [703/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:30.346 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:30.605 [705/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:30.863 [706/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:30.863 [707/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:30.863 [708/745] Linking static target lib/librte_node.a 00:01:30.863 [709/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:30.863 [710/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:31.122 [711/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.122 [712/745] Linking target lib/librte_node.so.23.0 00:01:31.122 [713/745] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:31.688 [714/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:31.946 [715/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:32.511 [716/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:33.444 [717/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:33.702 [718/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:34.635 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:41.198 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:13.311 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:13.311 [722/745] Linking static target lib/librte_vhost.a 00:02:13.311 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.311 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:23.297 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:23.297 [726/745] Linking static target lib/librte_pipeline.a 00:02:23.297 [727/745] Linking target app/dpdk-proc-info 00:02:23.297 [728/745] Linking target app/dpdk-test-flow-perf 00:02:23.297 [729/745] Linking target app/dpdk-test-eventdev 00:02:23.297 [730/745] Linking target app/dpdk-test-regex 00:02:23.297 [731/745] Linking target app/dpdk-test-gpudev 00:02:23.297 [732/745] Linking target app/dpdk-test-fib 00:02:23.297 [733/745] Linking target app/dpdk-test-cmdline 00:02:23.297 [734/745] Linking target app/dpdk-test-compress-perf 00:02:23.297 [735/745] Linking target app/dpdk-test-sad 00:02:23.297 [736/745] Linking target app/dpdk-pdump 00:02:23.297 [737/745] Linking target app/dpdk-test-acl 00:02:23.297 [738/745] Linking target app/dpdk-test-crypto-perf 00:02:23.297 [739/745] Linking target app/dpdk-testpmd 00:02:23.555 [740/745] Linking target app/dpdk-test-security-perf 00:02:23.555 [741/745] Linking target 
app/dpdk-dumpcap 00:02:23.555 [742/745] Linking target app/dpdk-test-bbdev 00:02:23.555 [743/745] Linking target app/dpdk-test-pipeline 00:02:25.496 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.496 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:25.496 10:23:13 build_native_dpdk -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j32 install 00:02:25.496 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:25.496 [0/1] Installing files. 00:02:25.785 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:25.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.785 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.785 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:25.786 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.786 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.786 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:25.787 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.787 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:25.787 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.788 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:25.788 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:25.788 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.788 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.789 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:25.789 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.789 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.790 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.790 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.791 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:25.791 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:25.791 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:25.791 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_mempool.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_timer.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_efd.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.791 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_member.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_security.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.792 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing lib/librte_graph.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:26.362 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:26.362 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:26.362 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.362 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:26.362 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-compress-perf to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.362 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.363 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.364 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.365 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:26.366 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:26.366 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:26.366 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:26.366 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:26.366 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:26.366 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:26.366 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:26.366 Installing 
symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:26.366 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:26.366 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:26.366 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:26.366 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:26.366 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:26.366 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:26.366 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:26.366 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:26.366 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:26.366 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:26.366 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:26.366 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:26.366 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:26.366 
Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:26.366 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:26.366 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:26.366 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:26.366 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:26.366 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:26.366 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:26.366 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:26.366 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:26.366 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:26.366 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:26.366 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:26.366 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:26.366 Installing symlink pointing to librte_bbdev.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:26.366 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:26.366 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:26.366 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:26.366 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:26.366 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:26.366 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:26.366 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:26.366 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:26.366 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:26.366 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:26.366 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:26.366 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:26.366 Installing symlink pointing to librte_efd.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:26.366 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:26.366 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:26.366 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:26.366 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:26.367 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:26.367 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:26.367 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:26.367 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:26.367 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:26.367 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:26.367 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:26.367 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:26.367 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:26.367 Installing 
symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:26.367 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:26.367 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:26.367 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:26.367 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:26.367 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:26.367 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:26.367 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:26.367 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:26.367 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:26.367 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:26.367 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:26.367 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:26.367 Installing symlink pointing to librte_regexdev.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:26.367 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:26.367 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:26.367 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:26.367 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:26.367 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:26.367 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:26.367 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:26.367 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:26.367 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:26.367 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:26.367 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:26.367 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:26.367 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:26.367 Installing 
symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:26.367 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:26.367 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:26.367 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:26.367 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:26.367 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:26.367 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:26.367 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:26.367 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:26.367 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:26.367 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:26.367 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:26.367 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:26.367 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 
00:02:26.367 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:26.367 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:26.367 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:26.367 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:26.367 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:26.367 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:26.367 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:26.367 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:26.367 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:26.367 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:26.367 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:26.367 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:26.367 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:26.367 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:26.367 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:26.367 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:26.367 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:26.367 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:26.367 
'./librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:26.367 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:26.367 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:26.367 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:26.367 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:26.367 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:26.367 10:23:14 build_native_dpdk -- common/autobuild_common.sh@192 -- $ uname -s 00:02:26.367 10:23:14 build_native_dpdk -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:26.367 10:23:14 build_native_dpdk -- common/autobuild_common.sh@203 -- $ cat 00:02:26.367 10:23:14 build_native_dpdk -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.367 00:02:26.367 real 1m28.114s 00:02:26.367 user 14m14.622s 00:02:26.367 sys 1m40.489s 00:02:26.367 10:23:14 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:26.367 10:23:14 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:26.367 ************************************ 00:02:26.367 END TEST build_native_dpdk 00:02:26.367 ************************************ 00:02:26.367 10:23:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:26.367 10:23:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:26.367 10:23:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:26.367 10:23:14 -- 
spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:26.367 10:23:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:26.367 10:23:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:26.367 10:23:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:26.367 10:23:14 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:26.627 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:26.627 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.627 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.627 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:26.886 Using 'verbs' RDMA provider 00:02:37.819 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:47.797 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:48.056 Creating mk/config.mk...done. 00:02:48.056 Creating mk/cc.flags.mk...done. 00:02:48.056 Type 'make' to build. 00:02:48.056 10:23:36 -- spdk/autobuild.sh@69 -- $ run_test make make -j32 00:02:48.056 10:23:36 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:48.056 10:23:36 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:48.056 10:23:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.314 ************************************ 00:02:48.314 START TEST make 00:02:48.314 ************************************ 00:02:48.314 10:23:36 make -- common/autotest_common.sh@1121 -- $ make -j32 00:02:48.574 make[1]: Nothing to be done for 'all'. 
00:02:49.967 The Meson build system 00:02:49.967 Version: 1.3.1 00:02:49.967 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:49.967 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:49.967 Build type: native build 00:02:49.967 Project name: libvfio-user 00:02:49.967 Project version: 0.0.1 00:02:49.967 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:49.967 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:49.967 Host machine cpu family: x86_64 00:02:49.967 Host machine cpu: x86_64 00:02:49.967 Run-time dependency threads found: YES 00:02:49.967 Library dl found: YES 00:02:49.967 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:49.967 Run-time dependency json-c found: YES 0.17 00:02:49.967 Run-time dependency cmocka found: YES 1.1.7 00:02:49.967 Program pytest-3 found: NO 00:02:49.967 Program flake8 found: NO 00:02:49.967 Program misspell-fixer found: NO 00:02:49.967 Program restructuredtext-lint found: NO 00:02:49.967 Program valgrind found: YES (/usr/bin/valgrind) 00:02:49.967 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:49.967 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:49.967 Compiler for C supports arguments -Wwrite-strings: YES 00:02:49.967 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:49.967 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:49.967 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:49.968 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:49.968 Build targets in project: 8 00:02:49.968 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:49.968 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:49.968 00:02:49.968 libvfio-user 0.0.1 00:02:49.968 00:02:49.968 User defined options 00:02:49.968 buildtype : debug 00:02:49.968 default_library: shared 00:02:49.968 libdir : /usr/local/lib 00:02:49.968 00:02:49.968 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.928 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:50.928 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:50.928 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:50.928 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:50.928 [4/37] Compiling C object samples/null.p/null.c.o 00:02:50.928 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:50.928 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:50.928 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:50.928 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:51.193 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:51.193 [10/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:51.193 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:51.193 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:51.193 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:51.193 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:51.193 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:51.193 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:51.193 [17/37] Compiling C object samples/server.p/server.c.o 00:02:51.193 [18/37] Compiling C object 
test/unit_tests.p/.._lib_pci_caps.c.o 00:02:51.193 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:51.193 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:51.193 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:51.193 [22/37] Compiling C object samples/client.p/client.c.o 00:02:51.193 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:51.193 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:51.193 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:51.193 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:51.193 [27/37] Linking target samples/client 00:02:51.193 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:51.456 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:51.456 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:51.456 [31/37] Linking target test/unit_tests 00:02:51.456 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:51.715 [33/37] Linking target samples/null 00:02:51.715 [34/37] Linking target samples/server 00:02:51.715 [35/37] Linking target samples/gpio-pci-idio-16 00:02:51.715 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:51.715 [37/37] Linking target samples/lspci 00:02:51.715 INFO: autodetecting backend as ninja 00:02:51.716 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:51.716 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:52.677 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:52.677 ninja: no work to do. 
00:03:07.571 CC lib/ut/ut.o 00:03:07.571 CC lib/ut_mock/mock.o 00:03:07.571 CC lib/log/log.o 00:03:07.571 CC lib/log/log_flags.o 00:03:07.571 CC lib/log/log_deprecated.o 00:03:07.571 LIB libspdk_log.a 00:03:07.571 LIB libspdk_ut.a 00:03:07.571 LIB libspdk_ut_mock.a 00:03:07.571 SO libspdk_ut_mock.so.6.0 00:03:07.571 SO libspdk_ut.so.2.0 00:03:07.571 SO libspdk_log.so.7.0 00:03:07.571 SYMLINK libspdk_ut.so 00:03:07.571 SYMLINK libspdk_ut_mock.so 00:03:07.571 SYMLINK libspdk_log.so 00:03:07.571 CXX lib/trace_parser/trace.o 00:03:07.571 CC lib/dma/dma.o 00:03:07.571 CC lib/ioat/ioat.o 00:03:07.571 CC lib/util/base64.o 00:03:07.571 CC lib/util/bit_array.o 00:03:07.571 CC lib/util/cpuset.o 00:03:07.571 CC lib/util/crc16.o 00:03:07.571 CC lib/util/crc32.o 00:03:07.571 CC lib/util/crc32c.o 00:03:07.571 CC lib/util/crc32_ieee.o 00:03:07.571 CC lib/util/crc64.o 00:03:07.571 CC lib/util/dif.o 00:03:07.571 CC lib/util/fd.o 00:03:07.571 CC lib/util/file.o 00:03:07.571 CC lib/util/iov.o 00:03:07.571 CC lib/util/hexlify.o 00:03:07.571 CC lib/util/math.o 00:03:07.571 CC lib/util/pipe.o 00:03:07.571 CC lib/util/strerror_tls.o 00:03:07.571 CC lib/util/string.o 00:03:07.571 CC lib/util/uuid.o 00:03:07.571 CC lib/util/fd_group.o 00:03:07.571 CC lib/util/xor.o 00:03:07.571 CC lib/util/zipf.o 00:03:07.571 CC lib/vfio_user/host/vfio_user_pci.o 00:03:07.571 CC lib/vfio_user/host/vfio_user.o 00:03:07.571 LIB libspdk_dma.a 00:03:07.571 SO libspdk_dma.so.4.0 00:03:07.571 LIB libspdk_ioat.a 00:03:07.571 SO libspdk_ioat.so.7.0 00:03:07.571 SYMLINK libspdk_dma.so 00:03:07.571 SYMLINK libspdk_ioat.so 00:03:07.571 LIB libspdk_vfio_user.a 00:03:07.571 SO libspdk_vfio_user.so.5.0 00:03:07.571 SYMLINK libspdk_vfio_user.so 00:03:07.571 LIB libspdk_util.a 00:03:07.571 SO libspdk_util.so.9.0 00:03:07.830 SYMLINK libspdk_util.so 00:03:07.830 CC lib/json/json_parse.o 00:03:07.830 CC lib/vmd/vmd.o 00:03:07.830 CC lib/json/json_util.o 00:03:07.830 CC lib/vmd/led.o 00:03:07.830 CC lib/json/json_write.o 
00:03:07.830 CC lib/conf/conf.o 00:03:07.830 CC lib/rdma/common.o 00:03:07.830 CC lib/idxd/idxd.o 00:03:07.830 CC lib/rdma/rdma_verbs.o 00:03:07.830 CC lib/idxd/idxd_user.o 00:03:07.830 CC lib/env_dpdk/env.o 00:03:07.830 CC lib/idxd/idxd_kernel.o 00:03:07.830 CC lib/env_dpdk/memory.o 00:03:07.830 CC lib/env_dpdk/pci.o 00:03:07.830 CC lib/env_dpdk/init.o 00:03:07.830 CC lib/env_dpdk/threads.o 00:03:07.830 CC lib/env_dpdk/pci_ioat.o 00:03:07.830 CC lib/env_dpdk/pci_virtio.o 00:03:07.830 CC lib/env_dpdk/pci_idxd.o 00:03:07.830 CC lib/env_dpdk/pci_vmd.o 00:03:07.830 CC lib/env_dpdk/sigbus_handler.o 00:03:07.830 CC lib/env_dpdk/pci_event.o 00:03:07.830 CC lib/env_dpdk/pci_dpdk.o 00:03:07.830 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:07.830 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:08.089 LIB libspdk_trace_parser.a 00:03:08.089 SO libspdk_trace_parser.so.5.0 00:03:08.349 LIB libspdk_conf.a 00:03:08.349 SYMLINK libspdk_trace_parser.so 00:03:08.349 SO libspdk_conf.so.6.0 00:03:08.349 SYMLINK libspdk_conf.so 00:03:08.349 LIB libspdk_rdma.a 00:03:08.349 SO libspdk_rdma.so.6.0 00:03:08.349 LIB libspdk_json.a 00:03:08.349 SO libspdk_json.so.6.0 00:03:08.349 SYMLINK libspdk_rdma.so 00:03:08.608 SYMLINK libspdk_json.so 00:03:08.608 LIB libspdk_idxd.a 00:03:08.608 SO libspdk_idxd.so.12.0 00:03:08.608 CC lib/jsonrpc/jsonrpc_server.o 00:03:08.608 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:08.608 CC lib/jsonrpc/jsonrpc_client.o 00:03:08.608 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:08.608 SYMLINK libspdk_idxd.so 00:03:08.608 LIB libspdk_vmd.a 00:03:08.866 SO libspdk_vmd.so.6.0 00:03:08.866 SYMLINK libspdk_vmd.so 00:03:08.866 LIB libspdk_jsonrpc.a 00:03:09.125 SO libspdk_jsonrpc.so.6.0 00:03:09.125 SYMLINK libspdk_jsonrpc.so 00:03:09.125 CC lib/rpc/rpc.o 00:03:09.384 LIB libspdk_rpc.a 00:03:09.643 SO libspdk_rpc.so.6.0 00:03:09.643 SYMLINK libspdk_rpc.so 00:03:09.643 CC lib/notify/notify.o 00:03:09.643 CC lib/notify/notify_rpc.o 00:03:09.643 CC lib/trace/trace.o 00:03:09.643 CC 
lib/trace/trace_flags.o 00:03:09.643 CC lib/keyring/keyring.o 00:03:09.643 CC lib/trace/trace_rpc.o 00:03:09.643 CC lib/keyring/keyring_rpc.o 00:03:09.902 LIB libspdk_notify.a 00:03:09.902 SO libspdk_notify.so.6.0 00:03:09.902 SYMLINK libspdk_notify.so 00:03:09.902 LIB libspdk_keyring.a 00:03:09.902 LIB libspdk_trace.a 00:03:09.902 SO libspdk_keyring.so.1.0 00:03:10.160 LIB libspdk_env_dpdk.a 00:03:10.160 SO libspdk_trace.so.10.0 00:03:10.160 SYMLINK libspdk_keyring.so 00:03:10.160 SO libspdk_env_dpdk.so.14.0 00:03:10.160 SYMLINK libspdk_trace.so 00:03:10.160 SYMLINK libspdk_env_dpdk.so 00:03:10.160 CC lib/thread/thread.o 00:03:10.160 CC lib/thread/iobuf.o 00:03:10.160 CC lib/sock/sock.o 00:03:10.160 CC lib/sock/sock_rpc.o 00:03:10.727 LIB libspdk_sock.a 00:03:10.727 SO libspdk_sock.so.9.0 00:03:10.727 SYMLINK libspdk_sock.so 00:03:10.985 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:10.985 CC lib/nvme/nvme_ctrlr.o 00:03:10.985 CC lib/nvme/nvme_fabric.o 00:03:10.985 CC lib/nvme/nvme_ns_cmd.o 00:03:10.985 CC lib/nvme/nvme_ns.o 00:03:10.985 CC lib/nvme/nvme_pcie_common.o 00:03:10.985 CC lib/nvme/nvme_pcie.o 00:03:10.985 CC lib/nvme/nvme_qpair.o 00:03:10.985 CC lib/nvme/nvme.o 00:03:10.985 CC lib/nvme/nvme_quirks.o 00:03:10.985 CC lib/nvme/nvme_transport.o 00:03:10.985 CC lib/nvme/nvme_discovery.o 00:03:10.985 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:10.985 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:10.985 CC lib/nvme/nvme_tcp.o 00:03:10.985 CC lib/nvme/nvme_opal.o 00:03:10.985 CC lib/nvme/nvme_poll_group.o 00:03:10.985 CC lib/nvme/nvme_io_msg.o 00:03:10.985 CC lib/nvme/nvme_zns.o 00:03:10.985 CC lib/nvme/nvme_stubs.o 00:03:10.985 CC lib/nvme/nvme_auth.o 00:03:10.985 CC lib/nvme/nvme_cuse.o 00:03:10.985 CC lib/nvme/nvme_vfio_user.o 00:03:10.985 CC lib/nvme/nvme_rdma.o 00:03:12.362 LIB libspdk_thread.a 00:03:12.362 SO libspdk_thread.so.10.0 00:03:12.362 SYMLINK libspdk_thread.so 00:03:12.362 CC lib/blob/blobstore.o 00:03:12.362 CC lib/blob/request.o 00:03:12.362 CC lib/blob/zeroes.o 
00:03:12.362 CC lib/blob/blob_bs_dev.o 00:03:12.362 CC lib/init/json_config.o 00:03:12.362 CC lib/init/subsystem.o 00:03:12.362 CC lib/vfu_tgt/tgt_rpc.o 00:03:12.362 CC lib/vfu_tgt/tgt_endpoint.o 00:03:12.362 CC lib/init/subsystem_rpc.o 00:03:12.362 CC lib/init/rpc.o 00:03:12.362 CC lib/virtio/virtio.o 00:03:12.362 CC lib/accel/accel.o 00:03:12.362 CC lib/virtio/virtio_vhost_user.o 00:03:12.362 CC lib/accel/accel_rpc.o 00:03:12.362 CC lib/virtio/virtio_vfio_user.o 00:03:12.362 CC lib/virtio/virtio_pci.o 00:03:12.362 CC lib/accel/accel_sw.o 00:03:12.620 LIB libspdk_init.a 00:03:12.879 SO libspdk_init.so.5.0 00:03:12.879 LIB libspdk_virtio.a 00:03:12.879 SYMLINK libspdk_init.so 00:03:12.879 LIB libspdk_vfu_tgt.a 00:03:12.879 SO libspdk_virtio.so.7.0 00:03:12.879 SO libspdk_vfu_tgt.so.3.0 00:03:12.879 SYMLINK libspdk_vfu_tgt.so 00:03:12.879 SYMLINK libspdk_virtio.so 00:03:12.879 CC lib/event/app.o 00:03:12.879 CC lib/event/reactor.o 00:03:12.879 CC lib/event/log_rpc.o 00:03:12.879 CC lib/event/app_rpc.o 00:03:12.879 CC lib/event/scheduler_static.o 00:03:13.445 LIB libspdk_event.a 00:03:13.445 SO libspdk_event.so.13.0 00:03:13.445 SYMLINK libspdk_event.so 00:03:13.703 LIB libspdk_accel.a 00:03:13.703 LIB libspdk_nvme.a 00:03:13.703 SO libspdk_accel.so.15.0 00:03:13.703 SYMLINK libspdk_accel.so 00:03:13.962 SO libspdk_nvme.so.13.0 00:03:13.962 CC lib/bdev/bdev.o 00:03:13.962 CC lib/bdev/bdev_rpc.o 00:03:13.962 CC lib/bdev/bdev_zone.o 00:03:13.962 CC lib/bdev/part.o 00:03:13.962 CC lib/bdev/scsi_nvme.o 00:03:14.220 SYMLINK libspdk_nvme.so 00:03:15.226 LIB libspdk_blob.a 00:03:15.226 SO libspdk_blob.so.11.0 00:03:15.485 SYMLINK libspdk_blob.so 00:03:15.485 CC lib/lvol/lvol.o 00:03:15.485 CC lib/blobfs/blobfs.o 00:03:15.485 CC lib/blobfs/tree.o 00:03:16.419 LIB libspdk_blobfs.a 00:03:16.419 SO libspdk_blobfs.so.10.0 00:03:16.419 SYMLINK libspdk_blobfs.so 00:03:16.677 LIB libspdk_lvol.a 00:03:16.677 SO libspdk_lvol.so.10.0 00:03:16.677 SYMLINK libspdk_lvol.so 00:03:17.243 
LIB libspdk_bdev.a 00:03:17.243 SO libspdk_bdev.so.15.0 00:03:17.243 SYMLINK libspdk_bdev.so 00:03:17.509 CC lib/scsi/dev.o 00:03:17.509 CC lib/ublk/ublk.o 00:03:17.509 CC lib/nvmf/ctrlr.o 00:03:17.509 CC lib/scsi/lun.o 00:03:17.509 CC lib/nbd/nbd.o 00:03:17.509 CC lib/ublk/ublk_rpc.o 00:03:17.509 CC lib/nvmf/ctrlr_discovery.o 00:03:17.509 CC lib/nbd/nbd_rpc.o 00:03:17.509 CC lib/scsi/port.o 00:03:17.509 CC lib/nvmf/ctrlr_bdev.o 00:03:17.509 CC lib/ftl/ftl_core.o 00:03:17.509 CC lib/nvmf/subsystem.o 00:03:17.509 CC lib/ftl/ftl_init.o 00:03:17.509 CC lib/scsi/scsi.o 00:03:17.509 CC lib/scsi/scsi_bdev.o 00:03:17.509 CC lib/nvmf/nvmf.o 00:03:17.509 CC lib/ftl/ftl_layout.o 00:03:17.509 CC lib/nvmf/nvmf_rpc.o 00:03:17.509 CC lib/scsi/scsi_pr.o 00:03:17.509 CC lib/ftl/ftl_debug.o 00:03:17.509 CC lib/scsi/scsi_rpc.o 00:03:17.509 CC lib/nvmf/transport.o 00:03:17.509 CC lib/ftl/ftl_io.o 00:03:17.509 CC lib/scsi/task.o 00:03:17.509 CC lib/nvmf/tcp.o 00:03:17.509 CC lib/ftl/ftl_sb.o 00:03:17.509 CC lib/nvmf/stubs.o 00:03:17.509 CC lib/ftl/ftl_l2p.o 00:03:17.509 CC lib/nvmf/mdns_server.o 00:03:17.509 CC lib/ftl/ftl_l2p_flat.o 00:03:17.509 CC lib/ftl/ftl_nv_cache.o 00:03:17.509 CC lib/nvmf/vfio_user.o 00:03:17.773 CC lib/nvmf/rdma.o 00:03:17.773 CC lib/nvmf/auth.o 00:03:17.773 CC lib/ftl/ftl_band.o 00:03:17.773 CC lib/ftl/ftl_band_ops.o 00:03:17.773 CC lib/ftl/ftl_writer.o 00:03:17.773 CC lib/ftl/ftl_rq.o 00:03:17.773 CC lib/ftl/ftl_reloc.o 00:03:17.773 CC lib/ftl/ftl_l2p_cache.o 00:03:18.039 CC lib/ftl/mngt/ftl_mngt.o 00:03:18.039 CC lib/ftl/ftl_p2l.o 00:03:18.039 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:18.039 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:18.039 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:18.039 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:18.039 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:18.039 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:18.298 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:18.298 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:18.298 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:18.298 CC 
lib/ftl/mngt/ftl_mngt_p2l.o 00:03:18.298 LIB libspdk_nbd.a 00:03:18.298 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:18.298 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:18.298 SO libspdk_nbd.so.7.0 00:03:18.298 CC lib/ftl/utils/ftl_conf.o 00:03:18.298 CC lib/ftl/utils/ftl_md.o 00:03:18.560 SYMLINK libspdk_nbd.so 00:03:18.560 CC lib/ftl/utils/ftl_mempool.o 00:03:18.560 LIB libspdk_scsi.a 00:03:18.560 CC lib/ftl/utils/ftl_bitmap.o 00:03:18.560 CC lib/ftl/utils/ftl_property.o 00:03:18.560 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:18.560 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:18.560 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:18.560 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:18.560 SO libspdk_scsi.so.9.0 00:03:18.560 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:18.560 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:18.560 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:18.820 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:18.820 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:18.820 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:18.820 LIB libspdk_ublk.a 00:03:18.820 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:18.820 CC lib/ftl/base/ftl_base_dev.o 00:03:18.820 SYMLINK libspdk_scsi.so 00:03:18.820 CC lib/ftl/base/ftl_base_bdev.o 00:03:18.820 SO libspdk_ublk.so.3.0 00:03:18.820 CC lib/ftl/ftl_trace.o 00:03:18.820 SYMLINK libspdk_ublk.so 00:03:19.079 CC lib/vhost/vhost.o 00:03:19.079 CC lib/vhost/vhost_rpc.o 00:03:19.079 CC lib/vhost/vhost_scsi.o 00:03:19.079 CC lib/vhost/rte_vhost_user.o 00:03:19.079 CC lib/vhost/vhost_blk.o 00:03:19.079 CC lib/iscsi/conn.o 00:03:19.079 CC lib/iscsi/init_grp.o 00:03:19.079 CC lib/iscsi/iscsi.o 00:03:19.079 CC lib/iscsi/md5.o 00:03:19.079 CC lib/iscsi/param.o 00:03:19.079 CC lib/iscsi/portal_grp.o 00:03:19.079 CC lib/iscsi/tgt_node.o 00:03:19.079 CC lib/iscsi/iscsi_subsystem.o 00:03:19.079 CC lib/iscsi/iscsi_rpc.o 00:03:19.079 CC lib/iscsi/task.o 00:03:19.338 LIB libspdk_ftl.a 00:03:19.596 SO libspdk_ftl.so.9.0 00:03:19.854 SYMLINK libspdk_ftl.so 00:03:20.421 LIB 
libspdk_vhost.a 00:03:20.421 SO libspdk_vhost.so.8.0 00:03:20.421 LIB libspdk_iscsi.a 00:03:20.679 SYMLINK libspdk_vhost.so 00:03:20.679 SO libspdk_iscsi.so.8.0 00:03:20.679 SYMLINK libspdk_iscsi.so 00:03:20.679 LIB libspdk_nvmf.a 00:03:20.938 SO libspdk_nvmf.so.18.0 00:03:20.938 SYMLINK libspdk_nvmf.so 00:03:21.196 CC module/vfu_device/vfu_virtio.o 00:03:21.196 CC module/env_dpdk/env_dpdk_rpc.o 00:03:21.196 CC module/vfu_device/vfu_virtio_blk.o 00:03:21.196 CC module/vfu_device/vfu_virtio_scsi.o 00:03:21.196 CC module/vfu_device/vfu_virtio_rpc.o 00:03:21.455 CC module/keyring/linux/keyring.o 00:03:21.455 CC module/keyring/file/keyring.o 00:03:21.455 CC module/keyring/linux/keyring_rpc.o 00:03:21.455 CC module/sock/posix/posix.o 00:03:21.455 CC module/keyring/file/keyring_rpc.o 00:03:21.455 CC module/accel/error/accel_error.o 00:03:21.455 CC module/blob/bdev/blob_bdev.o 00:03:21.455 CC module/accel/error/accel_error_rpc.o 00:03:21.455 CC module/scheduler/gscheduler/gscheduler.o 00:03:21.455 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:21.455 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:21.455 CC module/accel/dsa/accel_dsa.o 00:03:21.455 CC module/accel/dsa/accel_dsa_rpc.o 00:03:21.455 CC module/accel/iaa/accel_iaa.o 00:03:21.455 CC module/accel/iaa/accel_iaa_rpc.o 00:03:21.455 CC module/accel/ioat/accel_ioat.o 00:03:21.455 CC module/accel/ioat/accel_ioat_rpc.o 00:03:21.455 LIB libspdk_env_dpdk_rpc.a 00:03:21.455 SO libspdk_env_dpdk_rpc.so.6.0 00:03:21.715 LIB libspdk_scheduler_gscheduler.a 00:03:21.715 LIB libspdk_scheduler_dpdk_governor.a 00:03:21.715 SYMLINK libspdk_env_dpdk_rpc.so 00:03:21.715 SO libspdk_scheduler_gscheduler.so.4.0 00:03:21.715 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:21.715 LIB libspdk_accel_error.a 00:03:21.715 SYMLINK libspdk_scheduler_gscheduler.so 00:03:21.715 SO libspdk_accel_error.so.2.0 00:03:21.715 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:21.715 LIB libspdk_keyring_linux.a 00:03:21.715 LIB 
libspdk_accel_dsa.a 00:03:21.715 LIB libspdk_keyring_file.a 00:03:21.715 LIB libspdk_accel_ioat.a 00:03:21.715 LIB libspdk_scheduler_dynamic.a 00:03:21.715 SO libspdk_keyring_linux.so.1.0 00:03:21.715 SO libspdk_accel_dsa.so.5.0 00:03:21.715 SO libspdk_scheduler_dynamic.so.4.0 00:03:21.715 SO libspdk_keyring_file.so.1.0 00:03:21.715 SO libspdk_accel_ioat.so.6.0 00:03:21.715 SYMLINK libspdk_accel_error.so 00:03:21.715 LIB libspdk_blob_bdev.a 00:03:21.715 SYMLINK libspdk_keyring_linux.so 00:03:21.715 SYMLINK libspdk_accel_dsa.so 00:03:21.715 SYMLINK libspdk_scheduler_dynamic.so 00:03:21.715 LIB libspdk_accel_iaa.a 00:03:21.715 SYMLINK libspdk_keyring_file.so 00:03:21.715 SYMLINK libspdk_accel_ioat.so 00:03:21.715 SO libspdk_blob_bdev.so.11.0 00:03:21.715 SO libspdk_accel_iaa.so.3.0 00:03:21.973 SYMLINK libspdk_blob_bdev.so 00:03:21.973 SYMLINK libspdk_accel_iaa.so 00:03:22.241 CC module/bdev/aio/bdev_aio_rpc.o 00:03:22.241 CC module/bdev/aio/bdev_aio.o 00:03:22.241 CC module/bdev/lvol/vbdev_lvol.o 00:03:22.241 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:22.241 CC module/bdev/gpt/gpt.o 00:03:22.241 CC module/bdev/delay/vbdev_delay.o 00:03:22.241 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:22.241 CC module/bdev/gpt/vbdev_gpt.o 00:03:22.241 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:22.241 CC module/bdev/raid/bdev_raid.o 00:03:22.241 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:22.241 CC module/bdev/raid/bdev_raid_rpc.o 00:03:22.241 CC module/bdev/raid/bdev_raid_sb.o 00:03:22.241 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:22.241 CC module/bdev/ftl/bdev_ftl.o 00:03:22.241 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:22.241 CC module/bdev/raid/raid0.o 00:03:22.241 CC module/bdev/raid/raid1.o 00:03:22.241 CC module/bdev/raid/concat.o 00:03:22.241 CC module/bdev/split/vbdev_split.o 00:03:22.241 CC module/bdev/split/vbdev_split_rpc.o 00:03:22.241 CC module/bdev/malloc/bdev_malloc.o 00:03:22.241 CC module/bdev/nvme/bdev_nvme.o 00:03:22.241 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:03:22.241 CC module/bdev/error/vbdev_error.o 00:03:22.241 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:22.241 CC module/bdev/iscsi/bdev_iscsi.o 00:03:22.241 CC module/blobfs/bdev/blobfs_bdev.o 00:03:22.241 CC module/bdev/null/bdev_null.o 00:03:22.241 CC module/bdev/passthru/vbdev_passthru.o 00:03:22.241 LIB libspdk_vfu_device.a 00:03:22.500 SO libspdk_vfu_device.so.3.0 00:03:22.500 CC module/bdev/null/bdev_null_rpc.o 00:03:22.500 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:22.500 CC module/bdev/error/vbdev_error_rpc.o 00:03:22.500 LIB libspdk_sock_posix.a 00:03:22.500 SYMLINK libspdk_vfu_device.so 00:03:22.500 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:22.500 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:22.500 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:22.500 SO libspdk_sock_posix.so.6.0 00:03:22.500 CC module/bdev/nvme/nvme_rpc.o 00:03:22.500 CC module/bdev/nvme/bdev_mdns_client.o 00:03:22.500 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:22.500 CC module/bdev/nvme/vbdev_opal.o 00:03:22.500 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:22.759 LIB libspdk_bdev_split.a 00:03:22.759 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:22.759 SYMLINK libspdk_sock_posix.so 00:03:22.759 SO libspdk_bdev_split.so.6.0 00:03:22.759 LIB libspdk_bdev_gpt.a 00:03:22.759 LIB libspdk_bdev_aio.a 00:03:22.759 LIB libspdk_bdev_ftl.a 00:03:22.759 SO libspdk_bdev_gpt.so.6.0 00:03:22.759 LIB libspdk_bdev_zone_block.a 00:03:22.759 SYMLINK libspdk_bdev_split.so 00:03:22.759 LIB libspdk_bdev_null.a 00:03:22.759 SO libspdk_bdev_aio.so.6.0 00:03:22.759 SO libspdk_bdev_ftl.so.6.0 00:03:22.759 LIB libspdk_bdev_error.a 00:03:22.759 SO libspdk_bdev_zone_block.so.6.0 00:03:22.759 SO libspdk_bdev_null.so.6.0 00:03:22.759 LIB libspdk_blobfs_bdev.a 00:03:22.759 SO libspdk_bdev_error.so.6.0 00:03:23.017 SYMLINK libspdk_bdev_gpt.so 00:03:23.017 SYMLINK libspdk_bdev_ftl.so 00:03:23.017 SYMLINK libspdk_bdev_aio.so 00:03:23.017 SO 
libspdk_blobfs_bdev.so.6.0 00:03:23.017 LIB libspdk_bdev_iscsi.a 00:03:23.017 LIB libspdk_bdev_passthru.a 00:03:23.017 SO libspdk_bdev_iscsi.so.6.0 00:03:23.017 SYMLINK libspdk_bdev_zone_block.so 00:03:23.017 SYMLINK libspdk_bdev_null.so 00:03:23.017 SO libspdk_bdev_passthru.so.6.0 00:03:23.017 SYMLINK libspdk_bdev_error.so 00:03:23.017 LIB libspdk_bdev_delay.a 00:03:23.017 LIB libspdk_bdev_malloc.a 00:03:23.017 SYMLINK libspdk_blobfs_bdev.so 00:03:23.017 SO libspdk_bdev_delay.so.6.0 00:03:23.017 SO libspdk_bdev_malloc.so.6.0 00:03:23.017 SYMLINK libspdk_bdev_iscsi.so 00:03:23.017 SYMLINK libspdk_bdev_passthru.so 00:03:23.017 SYMLINK libspdk_bdev_delay.so 00:03:23.017 SYMLINK libspdk_bdev_malloc.so 00:03:23.017 LIB libspdk_bdev_virtio.a 00:03:23.017 SO libspdk_bdev_virtio.so.6.0 00:03:23.017 LIB libspdk_bdev_lvol.a 00:03:23.017 SYMLINK libspdk_bdev_virtio.so 00:03:23.017 SO libspdk_bdev_lvol.so.6.0 00:03:23.275 SYMLINK libspdk_bdev_lvol.so 00:03:23.535 LIB libspdk_bdev_raid.a 00:03:23.535 SO libspdk_bdev_raid.so.6.0 00:03:23.535 SYMLINK libspdk_bdev_raid.so 00:03:24.476 LIB libspdk_bdev_nvme.a 00:03:24.733 SO libspdk_bdev_nvme.so.7.0 00:03:24.733 SYMLINK libspdk_bdev_nvme.so 00:03:24.990 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:24.990 CC module/event/subsystems/scheduler/scheduler.o 00:03:24.990 CC module/event/subsystems/keyring/keyring.o 00:03:24.990 CC module/event/subsystems/sock/sock.o 00:03:24.990 CC module/event/subsystems/vmd/vmd.o 00:03:24.990 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:24.990 CC module/event/subsystems/iobuf/iobuf.o 00:03:24.990 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:24.990 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:25.248 LIB libspdk_event_keyring.a 00:03:25.248 LIB libspdk_event_vhost_blk.a 00:03:25.248 LIB libspdk_event_sock.a 00:03:25.248 LIB libspdk_event_scheduler.a 00:03:25.248 LIB libspdk_event_vfu_tgt.a 00:03:25.248 LIB libspdk_event_vmd.a 00:03:25.248 SO libspdk_event_keyring.so.1.0 
00:03:25.248 LIB libspdk_event_iobuf.a 00:03:25.248 SO libspdk_event_vhost_blk.so.3.0 00:03:25.248 SO libspdk_event_sock.so.5.0 00:03:25.248 SO libspdk_event_scheduler.so.4.0 00:03:25.248 SO libspdk_event_vfu_tgt.so.3.0 00:03:25.248 SO libspdk_event_vmd.so.6.0 00:03:25.248 SO libspdk_event_iobuf.so.3.0 00:03:25.248 SYMLINK libspdk_event_keyring.so 00:03:25.248 SYMLINK libspdk_event_vhost_blk.so 00:03:25.248 SYMLINK libspdk_event_sock.so 00:03:25.248 SYMLINK libspdk_event_scheduler.so 00:03:25.248 SYMLINK libspdk_event_vfu_tgt.so 00:03:25.248 SYMLINK libspdk_event_vmd.so 00:03:25.508 SYMLINK libspdk_event_iobuf.so 00:03:25.508 CC module/event/subsystems/accel/accel.o 00:03:25.768 LIB libspdk_event_accel.a 00:03:25.768 SO libspdk_event_accel.so.6.0 00:03:25.768 SYMLINK libspdk_event_accel.so 00:03:26.027 CC module/event/subsystems/bdev/bdev.o 00:03:26.285 LIB libspdk_event_bdev.a 00:03:26.285 SO libspdk_event_bdev.so.6.0 00:03:26.285 SYMLINK libspdk_event_bdev.so 00:03:26.542 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:26.542 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:26.542 CC module/event/subsystems/ublk/ublk.o 00:03:26.542 CC module/event/subsystems/scsi/scsi.o 00:03:26.542 CC module/event/subsystems/nbd/nbd.o 00:03:26.801 LIB libspdk_event_nbd.a 00:03:26.801 LIB libspdk_event_ublk.a 00:03:26.801 LIB libspdk_event_scsi.a 00:03:26.801 SO libspdk_event_nbd.so.6.0 00:03:26.801 SO libspdk_event_ublk.so.3.0 00:03:26.801 SO libspdk_event_scsi.so.6.0 00:03:26.801 SYMLINK libspdk_event_nbd.so 00:03:26.801 SYMLINK libspdk_event_ublk.so 00:03:26.801 SYMLINK libspdk_event_scsi.so 00:03:26.801 LIB libspdk_event_nvmf.a 00:03:26.801 SO libspdk_event_nvmf.so.6.0 00:03:26.801 SYMLINK libspdk_event_nvmf.so 00:03:27.059 CC module/event/subsystems/iscsi/iscsi.o 00:03:27.059 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:27.059 LIB libspdk_event_vhost_scsi.a 00:03:27.059 SO libspdk_event_vhost_scsi.so.3.0 00:03:27.059 LIB libspdk_event_iscsi.a 00:03:27.318 SO 
libspdk_event_iscsi.so.6.0 00:03:27.318 SYMLINK libspdk_event_vhost_scsi.so 00:03:27.318 SYMLINK libspdk_event_iscsi.so 00:03:27.318 SO libspdk.so.6.0 00:03:27.318 SYMLINK libspdk.so 00:03:27.581 CXX app/trace/trace.o 00:03:27.581 CC app/spdk_top/spdk_top.o 00:03:27.581 CC app/trace_record/trace_record.o 00:03:27.581 CC app/spdk_nvme_perf/perf.o 00:03:27.581 CC app/spdk_nvme_discover/discovery_aer.o 00:03:27.581 CC app/spdk_lspci/spdk_lspci.o 00:03:27.581 CC app/spdk_nvme_identify/identify.o 00:03:27.581 TEST_HEADER include/spdk/accel.h 00:03:27.581 TEST_HEADER include/spdk/accel_module.h 00:03:27.581 TEST_HEADER include/spdk/assert.h 00:03:27.581 TEST_HEADER include/spdk/barrier.h 00:03:27.581 TEST_HEADER include/spdk/base64.h 00:03:27.581 TEST_HEADER include/spdk/bdev.h 00:03:27.581 TEST_HEADER include/spdk/bdev_module.h 00:03:27.581 TEST_HEADER include/spdk/bdev_zone.h 00:03:27.581 TEST_HEADER include/spdk/bit_array.h 00:03:27.581 TEST_HEADER include/spdk/bit_pool.h 00:03:27.581 TEST_HEADER include/spdk/blob_bdev.h 00:03:27.581 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:27.581 TEST_HEADER include/spdk/blobfs.h 00:03:27.581 TEST_HEADER include/spdk/blob.h 00:03:27.581 TEST_HEADER include/spdk/conf.h 00:03:27.581 TEST_HEADER include/spdk/config.h 00:03:27.581 TEST_HEADER include/spdk/cpuset.h 00:03:27.581 TEST_HEADER include/spdk/crc16.h 00:03:27.581 TEST_HEADER include/spdk/crc32.h 00:03:27.844 TEST_HEADER include/spdk/crc64.h 00:03:27.844 CC app/nvmf_tgt/nvmf_main.o 00:03:27.844 TEST_HEADER include/spdk/dif.h 00:03:27.844 TEST_HEADER include/spdk/dma.h 00:03:27.844 TEST_HEADER include/spdk/endian.h 00:03:27.844 TEST_HEADER include/spdk/env_dpdk.h 00:03:27.844 TEST_HEADER include/spdk/env.h 00:03:27.844 TEST_HEADER include/spdk/event.h 00:03:27.844 CC app/vhost/vhost.o 00:03:27.844 TEST_HEADER include/spdk/fd_group.h 00:03:27.844 TEST_HEADER include/spdk/fd.h 00:03:27.844 CC app/iscsi_tgt/iscsi_tgt.o 00:03:27.844 TEST_HEADER include/spdk/file.h 00:03:27.844 
TEST_HEADER include/spdk/ftl.h 00:03:27.844 TEST_HEADER include/spdk/gpt_spec.h 00:03:27.844 TEST_HEADER include/spdk/hexlify.h 00:03:27.844 TEST_HEADER include/spdk/histogram_data.h 00:03:27.844 TEST_HEADER include/spdk/idxd.h 00:03:27.844 TEST_HEADER include/spdk/idxd_spec.h 00:03:27.844 CC app/spdk_tgt/spdk_tgt.o 00:03:27.844 TEST_HEADER include/spdk/init.h 00:03:27.844 CC examples/vmd/lsvmd/lsvmd.o 00:03:27.844 CC examples/idxd/perf/perf.o 00:03:27.844 CC examples/accel/perf/accel_perf.o 00:03:27.844 TEST_HEADER include/spdk/ioat.h 00:03:27.844 CC examples/sock/hello_world/hello_sock.o 00:03:27.844 CC test/nvme/aer/aer.o 00:03:27.844 TEST_HEADER include/spdk/ioat_spec.h 00:03:27.844 CC examples/ioat/perf/perf.o 00:03:27.844 TEST_HEADER include/spdk/iscsi_spec.h 00:03:27.844 TEST_HEADER include/spdk/json.h 00:03:27.844 CC examples/nvme/hello_world/hello_world.o 00:03:27.844 TEST_HEADER include/spdk/jsonrpc.h 00:03:27.844 TEST_HEADER include/spdk/keyring.h 00:03:27.844 CC examples/util/zipf/zipf.o 00:03:27.844 TEST_HEADER include/spdk/keyring_module.h 00:03:27.844 CC test/event/event_perf/event_perf.o 00:03:27.844 TEST_HEADER include/spdk/likely.h 00:03:27.844 TEST_HEADER include/spdk/log.h 00:03:27.844 TEST_HEADER include/spdk/lvol.h 00:03:27.844 TEST_HEADER include/spdk/memory.h 00:03:27.844 TEST_HEADER include/spdk/mmio.h 00:03:27.844 TEST_HEADER include/spdk/nbd.h 00:03:27.844 TEST_HEADER include/spdk/notify.h 00:03:27.844 TEST_HEADER include/spdk/nvme.h 00:03:27.844 TEST_HEADER include/spdk/nvme_intel.h 00:03:27.844 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:27.844 CC test/bdev/bdevio/bdevio.o 00:03:27.844 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:27.844 TEST_HEADER include/spdk/nvme_spec.h 00:03:27.844 CC examples/blob/hello_world/hello_blob.o 00:03:27.844 CC test/accel/dif/dif.o 00:03:27.844 TEST_HEADER include/spdk/nvme_zns.h 00:03:27.844 CC test/dma/test_dma/test_dma.o 00:03:27.844 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:27.844 CC 
examples/nvmf/nvmf/nvmf.o 00:03:27.844 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:27.844 CC examples/thread/thread/thread_ex.o 00:03:27.844 TEST_HEADER include/spdk/nvmf.h 00:03:27.844 TEST_HEADER include/spdk/nvmf_spec.h 00:03:27.844 TEST_HEADER include/spdk/nvmf_transport.h 00:03:27.844 TEST_HEADER include/spdk/opal.h 00:03:27.844 CC test/blobfs/mkfs/mkfs.o 00:03:27.844 CC examples/bdev/hello_world/hello_bdev.o 00:03:27.844 TEST_HEADER include/spdk/opal_spec.h 00:03:27.844 CC test/app/bdev_svc/bdev_svc.o 00:03:27.844 TEST_HEADER include/spdk/pci_ids.h 00:03:27.844 TEST_HEADER include/spdk/pipe.h 00:03:27.844 TEST_HEADER include/spdk/queue.h 00:03:27.844 TEST_HEADER include/spdk/reduce.h 00:03:27.844 TEST_HEADER include/spdk/rpc.h 00:03:27.844 TEST_HEADER include/spdk/scheduler.h 00:03:27.844 TEST_HEADER include/spdk/scsi.h 00:03:27.844 TEST_HEADER include/spdk/scsi_spec.h 00:03:27.844 TEST_HEADER include/spdk/sock.h 00:03:27.844 TEST_HEADER include/spdk/stdinc.h 00:03:27.844 CC test/env/mem_callbacks/mem_callbacks.o 00:03:27.844 LINK spdk_lspci 00:03:27.844 TEST_HEADER include/spdk/string.h 00:03:27.844 TEST_HEADER include/spdk/thread.h 00:03:27.844 TEST_HEADER include/spdk/trace.h 00:03:27.844 TEST_HEADER include/spdk/trace_parser.h 00:03:27.844 TEST_HEADER include/spdk/tree.h 00:03:27.844 TEST_HEADER include/spdk/ublk.h 00:03:27.844 TEST_HEADER include/spdk/util.h 00:03:27.844 TEST_HEADER include/spdk/uuid.h 00:03:27.844 TEST_HEADER include/spdk/version.h 00:03:27.844 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:27.844 CC test/lvol/esnap/esnap.o 00:03:27.844 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:27.844 TEST_HEADER include/spdk/vhost.h 00:03:27.844 TEST_HEADER include/spdk/vmd.h 00:03:28.105 TEST_HEADER include/spdk/xor.h 00:03:28.105 TEST_HEADER include/spdk/zipf.h 00:03:28.105 CXX test/cpp_headers/accel.o 00:03:28.105 LINK spdk_nvme_discover 00:03:28.105 LINK lsvmd 00:03:28.105 LINK nvmf_tgt 00:03:28.105 LINK event_perf 00:03:28.105 LINK zipf 
00:03:28.105 LINK spdk_trace_record 00:03:28.105 LINK iscsi_tgt 00:03:28.105 LINK vhost 00:03:28.105 LINK spdk_tgt 00:03:28.374 LINK hello_sock 00:03:28.374 LINK ioat_perf 00:03:28.374 LINK hello_world 00:03:28.374 LINK bdev_svc 00:03:28.374 LINK mem_callbacks 00:03:28.374 LINK mkfs 00:03:28.374 LINK hello_blob 00:03:28.374 LINK thread 00:03:28.374 LINK hello_bdev 00:03:28.374 CC examples/nvme/reconnect/reconnect.o 00:03:28.374 LINK aer 00:03:28.374 CXX test/cpp_headers/accel_module.o 00:03:28.374 LINK spdk_trace 00:03:28.374 LINK nvmf 00:03:28.374 LINK idxd_perf 00:03:28.633 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:28.633 CC examples/vmd/led/led.o 00:03:28.633 LINK bdevio 00:03:28.633 LINK test_dma 00:03:28.633 CC test/event/reactor/reactor.o 00:03:28.633 CC test/rpc_client/rpc_client_test.o 00:03:28.633 LINK accel_perf 00:03:28.633 CXX test/cpp_headers/assert.o 00:03:28.633 CC examples/blob/cli/blobcli.o 00:03:28.633 CC test/env/vtophys/vtophys.o 00:03:28.633 CC examples/ioat/verify/verify.o 00:03:28.633 CC test/nvme/reset/reset.o 00:03:28.633 CC test/nvme/sgl/sgl.o 00:03:28.633 CC examples/bdev/bdevperf/bdevperf.o 00:03:28.633 LINK dif 00:03:28.897 CC test/event/reactor_perf/reactor_perf.o 00:03:28.897 LINK led 00:03:28.897 CC test/app/histogram_perf/histogram_perf.o 00:03:28.897 LINK reactor 00:03:28.897 CC test/app/jsoncat/jsoncat.o 00:03:28.897 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:28.897 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:28.897 CC examples/nvme/hotplug/hotplug.o 00:03:28.897 CC examples/nvme/arbitration/arbitration.o 00:03:28.897 CC test/thread/poller_perf/poller_perf.o 00:03:28.897 CXX test/cpp_headers/barrier.o 00:03:28.897 LINK rpc_client_test 00:03:28.897 LINK vtophys 00:03:28.897 CC test/event/app_repeat/app_repeat.o 00:03:28.897 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:28.897 CC test/nvme/e2edp/nvme_dp.o 00:03:28.897 LINK reconnect 00:03:29.164 CC test/event/scheduler/scheduler.o 00:03:29.164 LINK 
reactor_perf 00:03:29.164 LINK spdk_nvme_perf 00:03:29.164 CC test/nvme/overhead/overhead.o 00:03:29.164 CC test/nvme/err_injection/err_injection.o 00:03:29.164 CXX test/cpp_headers/base64.o 00:03:29.164 LINK verify 00:03:29.164 LINK histogram_perf 00:03:29.164 LINK reset 00:03:29.164 LINK jsoncat 00:03:29.164 CC app/spdk_dd/spdk_dd.o 00:03:29.164 LINK env_dpdk_post_init 00:03:29.164 LINK poller_perf 00:03:29.164 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:29.425 LINK sgl 00:03:29.425 CC test/nvme/startup/startup.o 00:03:29.425 LINK app_repeat 00:03:29.425 LINK interrupt_tgt 00:03:29.425 LINK spdk_nvme_identify 00:03:29.425 LINK hotplug 00:03:29.425 CC test/env/memory/memory_ut.o 00:03:29.425 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:29.425 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:29.425 LINK spdk_top 00:03:29.425 LINK nvme_dp 00:03:29.425 LINK nvme_manage 00:03:29.425 CC test/nvme/reserve/reserve.o 00:03:29.425 CC test/app/stub/stub.o 00:03:29.425 CXX test/cpp_headers/bdev.o 00:03:29.425 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:29.425 LINK scheduler 00:03:29.425 CXX test/cpp_headers/bdev_module.o 00:03:29.689 LINK err_injection 00:03:29.689 CC test/env/pci/pci_ut.o 00:03:29.689 LINK arbitration 00:03:29.689 CC test/nvme/simple_copy/simple_copy.o 00:03:29.689 CC examples/nvme/abort/abort.o 00:03:29.689 LINK nvme_fuzz 00:03:29.689 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:29.689 LINK startup 00:03:29.689 LINK blobcli 00:03:29.689 CXX test/cpp_headers/bdev_zone.o 00:03:29.689 LINK overhead 00:03:29.689 CXX test/cpp_headers/bit_array.o 00:03:29.689 CXX test/cpp_headers/bit_pool.o 00:03:29.689 CXX test/cpp_headers/blob_bdev.o 00:03:29.689 CC test/nvme/connect_stress/connect_stress.o 00:03:29.689 CC app/fio/nvme/fio_plugin.o 00:03:29.689 LINK cmb_copy 00:03:29.689 CC test/nvme/boot_partition/boot_partition.o 00:03:29.954 CC test/nvme/compliance/nvme_compliance.o 00:03:29.954 CC test/nvme/fused_ordering/fused_ordering.o 00:03:29.954 CXX 
test/cpp_headers/blobfs_bdev.o 00:03:29.954 LINK reserve 00:03:29.954 LINK stub 00:03:29.954 LINK spdk_dd 00:03:29.954 CXX test/cpp_headers/blobfs.o 00:03:29.954 CXX test/cpp_headers/blob.o 00:03:29.954 CC app/fio/bdev/fio_plugin.o 00:03:29.954 LINK pmr_persistence 00:03:29.954 CXX test/cpp_headers/conf.o 00:03:29.954 CXX test/cpp_headers/config.o 00:03:29.954 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:29.954 CC test/nvme/cuse/cuse.o 00:03:29.954 CXX test/cpp_headers/cpuset.o 00:03:30.217 CC test/nvme/fdp/fdp.o 00:03:30.217 CXX test/cpp_headers/crc16.o 00:03:30.217 LINK simple_copy 00:03:30.217 CXX test/cpp_headers/crc32.o 00:03:30.217 CXX test/cpp_headers/crc64.o 00:03:30.217 CXX test/cpp_headers/dif.o 00:03:30.217 LINK boot_partition 00:03:30.217 LINK connect_stress 00:03:30.217 CXX test/cpp_headers/dma.o 00:03:30.217 CXX test/cpp_headers/endian.o 00:03:30.217 CXX test/cpp_headers/env_dpdk.o 00:03:30.217 CXX test/cpp_headers/env.o 00:03:30.217 CXX test/cpp_headers/event.o 00:03:30.217 CXX test/cpp_headers/fd_group.o 00:03:30.217 LINK fused_ordering 00:03:30.217 CXX test/cpp_headers/fd.o 00:03:30.217 CXX test/cpp_headers/file.o 00:03:30.217 CXX test/cpp_headers/ftl.o 00:03:30.217 LINK bdevperf 00:03:30.480 LINK abort 00:03:30.480 LINK vhost_fuzz 00:03:30.480 CXX test/cpp_headers/gpt_spec.o 00:03:30.480 CXX test/cpp_headers/hexlify.o 00:03:30.480 CXX test/cpp_headers/histogram_data.o 00:03:30.480 CXX test/cpp_headers/idxd.o 00:03:30.480 LINK doorbell_aers 00:03:30.480 CXX test/cpp_headers/idxd_spec.o 00:03:30.480 LINK pci_ut 00:03:30.480 CXX test/cpp_headers/init.o 00:03:30.480 LINK nvme_compliance 00:03:30.480 CXX test/cpp_headers/ioat.o 00:03:30.480 CXX test/cpp_headers/ioat_spec.o 00:03:30.480 CXX test/cpp_headers/iscsi_spec.o 00:03:30.480 CXX test/cpp_headers/json.o 00:03:30.480 CXX test/cpp_headers/jsonrpc.o 00:03:30.480 CXX test/cpp_headers/keyring.o 00:03:30.480 CXX test/cpp_headers/keyring_module.o 00:03:30.480 CXX test/cpp_headers/likely.o 
00:03:30.480 CXX test/cpp_headers/log.o 00:03:30.480 CXX test/cpp_headers/lvol.o 00:03:30.741 CXX test/cpp_headers/memory.o 00:03:30.741 CXX test/cpp_headers/mmio.o 00:03:30.741 CXX test/cpp_headers/nbd.o 00:03:30.741 CXX test/cpp_headers/notify.o 00:03:30.741 CXX test/cpp_headers/nvme.o 00:03:30.741 CXX test/cpp_headers/nvme_intel.o 00:03:30.741 CXX test/cpp_headers/nvme_ocssd.o 00:03:30.741 LINK memory_ut 00:03:30.741 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:30.741 CXX test/cpp_headers/nvme_spec.o 00:03:30.741 CXX test/cpp_headers/nvme_zns.o 00:03:30.741 CXX test/cpp_headers/nvmf_cmd.o 00:03:30.741 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:30.741 LINK fdp 00:03:30.741 CXX test/cpp_headers/nvmf.o 00:03:30.741 CXX test/cpp_headers/nvmf_spec.o 00:03:30.741 CXX test/cpp_headers/nvmf_transport.o 00:03:30.741 LINK spdk_nvme 00:03:30.741 CXX test/cpp_headers/opal.o 00:03:31.006 CXX test/cpp_headers/opal_spec.o 00:03:31.006 CXX test/cpp_headers/pci_ids.o 00:03:31.006 CXX test/cpp_headers/pipe.o 00:03:31.006 CXX test/cpp_headers/queue.o 00:03:31.006 CXX test/cpp_headers/reduce.o 00:03:31.006 CXX test/cpp_headers/rpc.o 00:03:31.006 CXX test/cpp_headers/scheduler.o 00:03:31.006 CXX test/cpp_headers/scsi.o 00:03:31.006 CXX test/cpp_headers/scsi_spec.o 00:03:31.006 CXX test/cpp_headers/sock.o 00:03:31.006 CXX test/cpp_headers/stdinc.o 00:03:31.006 CXX test/cpp_headers/string.o 00:03:31.006 CXX test/cpp_headers/thread.o 00:03:31.006 CXX test/cpp_headers/trace.o 00:03:31.006 LINK spdk_bdev 00:03:31.006 CXX test/cpp_headers/trace_parser.o 00:03:31.006 CXX test/cpp_headers/tree.o 00:03:31.006 CXX test/cpp_headers/ublk.o 00:03:31.006 CXX test/cpp_headers/util.o 00:03:31.006 CXX test/cpp_headers/uuid.o 00:03:31.006 CXX test/cpp_headers/version.o 00:03:31.268 CXX test/cpp_headers/vfio_user_pci.o 00:03:31.268 CXX test/cpp_headers/vfio_user_spec.o 00:03:31.268 CXX test/cpp_headers/vhost.o 00:03:31.268 CXX test/cpp_headers/vmd.o 00:03:31.268 CXX test/cpp_headers/xor.o 00:03:31.268 
CXX test/cpp_headers/zipf.o 00:03:31.834 LINK cuse 00:03:32.092 LINK iscsi_fuzz 00:03:34.618 LINK esnap 00:03:35.184 00:03:35.184 real 0m47.008s 00:03:35.184 user 8m18.631s 00:03:35.184 sys 1m45.186s 00:03:35.184 10:24:23 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:35.184 10:24:23 make -- common/autotest_common.sh@10 -- $ set +x 00:03:35.184 ************************************ 00:03:35.184 END TEST make 00:03:35.184 ************************************ 00:03:35.184 10:24:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:35.184 10:24:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:35.184 10:24:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:35.184 10:24:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.184 10:24:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:35.184 10:24:23 -- pm/common@44 -- $ pid=3623196 00:03:35.184 10:24:23 -- pm/common@50 -- $ kill -TERM 3623196 00:03:35.184 10:24:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.184 10:24:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:35.184 10:24:23 -- pm/common@44 -- $ pid=3623198 00:03:35.184 10:24:23 -- pm/common@50 -- $ kill -TERM 3623198 00:03:35.184 10:24:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.184 10:24:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:35.184 10:24:23 -- pm/common@44 -- $ pid=3623200 00:03:35.184 10:24:23 -- pm/common@50 -- $ kill -TERM 3623200 00:03:35.184 10:24:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.184 10:24:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:35.184 10:24:23 -- pm/common@44 -- $ pid=3623228 
00:03:35.184 10:24:23 -- pm/common@50 -- $ sudo -E kill -TERM 3623228 00:03:35.443 10:24:23 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:35.443 10:24:23 -- nvmf/common.sh@7 -- # uname -s 00:03:35.443 10:24:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:35.443 10:24:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:35.443 10:24:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:35.443 10:24:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:35.443 10:24:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:35.443 10:24:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:35.443 10:24:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:35.443 10:24:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:35.443 10:24:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:35.443 10:24:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:35.443 10:24:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:03:35.443 10:24:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:03:35.443 10:24:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:35.443 10:24:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:35.443 10:24:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:35.443 10:24:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:35.443 10:24:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:35.443 10:24:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:35.443 10:24:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:35.443 10:24:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:35.443 10:24:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.443 10:24:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.443 10:24:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.443 10:24:23 -- paths/export.sh@5 -- # export PATH 00:03:35.443 10:24:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.443 10:24:23 -- nvmf/common.sh@47 -- # : 0 00:03:35.443 10:24:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:35.443 10:24:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:35.443 10:24:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:35.443 10:24:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:35.443 10:24:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:35.443 10:24:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:35.443 10:24:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:35.443 10:24:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:35.443 10:24:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:35.443 10:24:23 -- spdk/autotest.sh@32 -- # 
uname -s 00:03:35.443 10:24:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:35.443 10:24:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:35.443 10:24:23 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:35.443 10:24:23 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:35.443 10:24:23 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:35.443 10:24:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:35.443 10:24:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:35.443 10:24:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:35.443 10:24:23 -- spdk/autotest.sh@48 -- # udevadm_pid=3696267 00:03:35.443 10:24:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:35.443 10:24:23 -- pm/common@17 -- # local monitor 00:03:35.443 10:24:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:35.443 10:24:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.443 10:24:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.443 10:24:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.443 10:24:23 -- pm/common@21 -- # date +%s 00:03:35.443 10:24:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.443 10:24:23 -- pm/common@21 -- # date +%s 00:03:35.443 10:24:23 -- pm/common@25 -- # sleep 1 00:03:35.443 10:24:23 -- pm/common@21 -- # date +%s 00:03:35.443 10:24:23 -- pm/common@21 -- # date +%s 00:03:35.443 10:24:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721723063 00:03:35.443 10:24:23 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721723063 00:03:35.443 10:24:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721723063 00:03:35.443 10:24:23 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721723063 00:03:35.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721723063_collect-vmstat.pm.log 00:03:35.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721723063_collect-cpu-load.pm.log 00:03:35.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721723063_collect-cpu-temp.pm.log 00:03:35.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721723063_collect-bmc-pm.bmc.pm.log 00:03:36.383 10:24:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:36.383 10:24:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:36.383 10:24:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:36.383 10:24:24 -- common/autotest_common.sh@10 -- # set +x 00:03:36.383 10:24:24 -- spdk/autotest.sh@59 -- # create_test_list 00:03:36.383 10:24:24 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:36.383 10:24:24 -- common/autotest_common.sh@10 -- # set +x 00:03:36.383 10:24:24 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:36.383 10:24:24 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:36.383 10:24:24 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:36.383 10:24:24 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:36.383 10:24:24 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:36.383 10:24:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:36.383 10:24:24 -- common/autotest_common.sh@1451 -- # uname 00:03:36.383 10:24:24 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:36.383 10:24:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:36.383 10:24:24 -- common/autotest_common.sh@1471 -- # uname 00:03:36.383 10:24:24 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:36.383 10:24:24 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:36.383 10:24:24 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:36.383 10:24:24 -- spdk/autotest.sh@72 -- # hash lcov 00:03:36.383 10:24:24 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:36.383 10:24:24 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:36.383 --rc lcov_branch_coverage=1 00:03:36.383 --rc lcov_function_coverage=1 00:03:36.383 --rc genhtml_branch_coverage=1 00:03:36.383 --rc genhtml_function_coverage=1 00:03:36.383 --rc genhtml_legend=1 00:03:36.383 --rc geninfo_all_blocks=1 00:03:36.383 ' 00:03:36.383 10:24:24 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:36.383 --rc lcov_branch_coverage=1 00:03:36.383 --rc lcov_function_coverage=1 00:03:36.383 --rc genhtml_branch_coverage=1 00:03:36.383 --rc genhtml_function_coverage=1 00:03:36.383 --rc genhtml_legend=1 00:03:36.383 --rc geninfo_all_blocks=1 00:03:36.383 ' 00:03:36.383 10:24:24 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:36.383 --rc lcov_branch_coverage=1 00:03:36.383 --rc lcov_function_coverage=1 00:03:36.383 --rc genhtml_branch_coverage=1 00:03:36.383 --rc 
genhtml_function_coverage=1 00:03:36.383 --rc genhtml_legend=1 00:03:36.383 --rc geninfo_all_blocks=1 00:03:36.383 --no-external' 00:03:36.383 10:24:24 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:36.383 --rc lcov_branch_coverage=1 00:03:36.383 --rc lcov_function_coverage=1 00:03:36.383 --rc genhtml_branch_coverage=1 00:03:36.383 --rc genhtml_function_coverage=1 00:03:36.383 --rc genhtml_legend=1 00:03:36.383 --rc geninfo_all_blocks=1 00:03:36.383 --no-external' 00:03:36.383 10:24:24 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:36.383 lcov: LCOV version 1.14 00:03:36.383 10:24:24 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:51.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:51.310 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no 
functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:06.197 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:06.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions 
found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:06.198 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no 
functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:06.198 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:06.198 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:06.198 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:06.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:06.199 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:06.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:06.199 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:06.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:06.199 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:06.199 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:06.199 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:09.479 10:24:57 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:09.479 10:24:57 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:09.479 10:24:57 -- common/autotest_common.sh@10 -- # set +x 00:04:09.479 10:24:57 -- spdk/autotest.sh@91 -- # rm -f 00:04:09.479 10:24:57 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.858 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:04:10.858 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:04:10.858 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:04:10.858 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:04:10.858 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:04:10.858 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:04:10.858 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:04:10.858 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:04:10.858 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:04:10.858 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:04:10.858 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:04:10.858 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:04:10.858 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:04:10.858 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:04:10.858 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:04:10.858 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:04:10.858 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:04:10.858 10:24:59 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:10.858 10:24:59 -- common/autotest_common.sh@1665 -- # zoned_devs=() 
00:04:10.858 10:24:59 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:10.858 10:24:59 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:10.858 10:24:59 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:10.858 10:24:59 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:10.858 10:24:59 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:10.858 10:24:59 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.858 10:24:59 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:10.858 10:24:59 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:10.858 10:24:59 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.858 10:24:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:10.858 10:24:59 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:10.858 10:24:59 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:10.858 10:24:59 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:10.858 No valid GPT data, bailing 00:04:10.858 10:24:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:10.858 10:24:59 -- scripts/common.sh@391 -- # pt= 00:04:10.858 10:24:59 -- scripts/common.sh@392 -- # return 1 00:04:10.858 10:24:59 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:10.858 1+0 records in 00:04:10.858 1+0 records out 00:04:10.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00240143 s, 437 MB/s 00:04:10.858 10:24:59 -- spdk/autotest.sh@118 -- # sync 00:04:10.858 10:24:59 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:10.858 10:24:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:10.858 10:24:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:12.761 10:25:00 -- spdk/autotest.sh@124 -- # uname -s 00:04:12.761 10:25:00 -- 
spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:12.761 10:25:00 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:12.761 10:25:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:12.761 10:25:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:12.761 10:25:00 -- common/autotest_common.sh@10 -- # set +x 00:04:12.761 ************************************ 00:04:12.761 START TEST setup.sh 00:04:12.761 ************************************ 00:04:12.761 10:25:00 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:12.761 * Looking for test storage... 00:04:12.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:12.761 10:25:00 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:12.761 10:25:00 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:12.761 10:25:00 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:12.761 10:25:00 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:12.761 10:25:00 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:12.761 10:25:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:12.761 ************************************ 00:04:12.761 START TEST acl 00:04:12.761 ************************************ 00:04:12.761 10:25:00 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:12.761 * Looking for test storage... 
00:04:12.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:12.761 10:25:01 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:12.761 10:25:01 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:12.761 10:25:01 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:12.761 10:25:01 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:12.761 10:25:01 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:12.761 10:25:01 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:12.761 10:25:01 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:12.761 10:25:01 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.761 10:25:01 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:12.761 10:25:01 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:12.761 10:25:01 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:12.761 10:25:01 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:12.761 10:25:01 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:12.761 10:25:01 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:12.761 10:25:01 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.762 10:25:01 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.139 10:25:02 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:14.139 10:25:02 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:14.139 10:25:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.139 10:25:02 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:14.139 10:25:02 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.139 10:25:02 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:14.709 Hugepages 00:04:14.709 node hugesize free / total 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.709 00:04:14.709 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.709 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 
10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:84:00.0 == *:*:*.* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:14.968 10:25:03 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:14.968 10:25:03 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:14.968 10:25:03 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:14.968 10:25:03 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:14.968 ************************************ 00:04:14.968 START TEST denied 00:04:14.968 ************************************ 00:04:14.968 10:25:03 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:14.969 10:25:03 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:84:00.0' 00:04:14.969 10:25:03 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:14.969 10:25:03 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.969 10:25:03 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:84:00.0' 00:04:14.969 10:25:03 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:16.348 0000:84:00.0 (8086 0a54): Skipping denied controller at 0000:84:00.0 00:04:16.348 10:25:04 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:84:00.0 00:04:16.348 10:25:04 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:16.348 10:25:04 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:16.348 10:25:04 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:84:00.0 ]] 00:04:16.348 10:25:04 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:84:00.0/driver 00:04:16.348 10:25:04 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:16.348 10:25:04 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:16.348 10:25:04 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:16.348 10:25:04 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.348 10:25:04 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.884 00:04:18.884 real 0m3.432s 00:04:18.884 user 0m1.045s 00:04:18.884 sys 0m1.626s 00:04:18.884 10:25:06 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:18.884 10:25:06 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:18.884 ************************************ 00:04:18.884 END TEST denied 00:04:18.884 ************************************ 00:04:18.884 10:25:06 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:18.884 10:25:06 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.884 10:25:06 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.884 10:25:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:18.884 ************************************ 00:04:18.884 START TEST allowed 00:04:18.884 
************************************ 00:04:18.884 10:25:06 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:18.884 10:25:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:84:00.0 00:04:18.884 10:25:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:18.884 10:25:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.884 10:25:06 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:84:00.0 .*: nvme -> .*' 00:04:18.884 10:25:06 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:20.792 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:04:20.792 10:25:08 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:20.792 10:25:08 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:20.792 10:25:08 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:20.792 10:25:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.792 10:25:08 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.170 00:04:22.170 real 0m3.486s 00:04:22.170 user 0m0.915s 00:04:22.170 sys 0m1.537s 00:04:22.170 10:25:10 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:22.170 10:25:10 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:22.170 ************************************ 00:04:22.170 END TEST allowed 00:04:22.170 ************************************ 00:04:22.170 00:04:22.170 real 0m9.379s 00:04:22.170 user 0m2.939s 00:04:22.170 sys 0m4.758s 00:04:22.170 10:25:10 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:22.170 10:25:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:22.170 ************************************ 00:04:22.170 END TEST acl 00:04:22.170 ************************************ 00:04:22.171 10:25:10 setup.sh 
-- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:22.171 10:25:10 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.171 10:25:10 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.171 10:25:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.171 ************************************ 00:04:22.171 START TEST hugepages 00:04:22.171 ************************************ 00:04:22.171 10:25:10 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:22.171 * Looking for test storage... 00:04:22.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.171 
10:25:10 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 30032552 kB' 'MemAvailable: 33968868 kB' 'Buffers: 3900 kB' 'Cached: 15787140 kB' 'SwapCached: 0 kB' 'Active: 12618496 kB' 'Inactive: 3694608 kB' 'Active(anon): 12184536 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524924 kB' 'Mapped: 166976 kB' 'Shmem: 11662472 kB' 'KReclaimable: 419412 kB' 'Slab: 703640 kB' 'SReclaimable: 419412 kB' 'SUnreclaim: 284228 kB' 'KernelStack: 10144 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32437040 kB' 'Committed_AS: 13173032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189836 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 
10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.171 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 
10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
'
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:22.172 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:22.173 10:25:10 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:22.173 10:25:10 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:22.173 10:25:10 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:22.173 10:25:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:22.173 ************************************
00:04:22.173 START TEST default_setup
00:04:22.173 ************************************
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:22.173 10:25:10 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:23.112 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci
00:04:23.112 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci
00:04:23.112 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci
00:04:23.112 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci
00:04:23.112 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci
00:04:23.112 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci
00:04:23.112 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci
00:04:23.112 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci
00:04:23.112 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci
00:04:23.112 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci
00:04:23.112 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci
00:04:23.112 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci
00:04:23.112 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci
00:04:23.112 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci
00:04:23.370 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci
00:04:23.370 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci
00:04:24.311 0000:84:00.0 (8086 0a54): nvme -> vfio-pci
00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:24.311 10:25:12
setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32126756 kB' 'MemAvailable: 36062960 kB' 'Buffers: 3900 kB' 'Cached: 15787220 kB' 'SwapCached: 0 kB' 'Active: 12637544 kB' 'Inactive: 3694608 kB' 'Active(anon): 12203584 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544280 kB' 'Mapped: 167264 kB' 'Shmem: 11662552 kB' 'KReclaimable: 419300 kB' 'Slab: 703212 kB' 'SReclaimable: 
419300 kB' 'SUnreclaim: 283912 kB' 'KernelStack: 10320 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13196172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190108 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.311 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 
10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 
10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.312 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@19 -- # local var val 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32125740 kB' 'MemAvailable: 36061944 kB' 'Buffers: 3900 kB' 'Cached: 15787224 kB' 'SwapCached: 0 kB' 'Active: 12637300 kB' 'Inactive: 3694608 kB' 'Active(anon): 12203340 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543932 kB' 'Mapped: 167148 kB' 'Shmem: 11662556 kB' 'KReclaimable: 419300 kB' 'Slab: 703284 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 283984 kB' 'KernelStack: 10288 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13196192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189996 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.313 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 
10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.314 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.314 10:25:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32129404 kB' 'MemAvailable: 36065608 kB' 'Buffers: 3900 kB' 'Cached: 15787252 kB' 'SwapCached: 0 kB' 'Active: 12637088 kB' 'Inactive: 3694608 kB' 'Active(anon): 12203128 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543716 kB' 'Mapped: 167148 kB' 'Shmem: 11662584 kB' 'KReclaimable: 419300 kB' 'Slab: 703284 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 283984 kB' 'KernelStack: 10080 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13193852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189900 kB' 
'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315 10:25:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.315 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.315
[identical xtrace iterations repeat for each remaining /proc/meminfo field until HugePages_Rsvd is reached]
10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.317 nr_hugepages=1024 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.317 resv_hugepages=0 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.317
surplus_hugepages=0 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.317 anon_hugepages=0 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32129152 kB' 'MemAvailable: 36065356 kB' 'Buffers: 3900 kB' 'Cached: 15787256 kB' 'SwapCached: 0 kB' 'Active: 12637060 kB' 'Inactive: 3694608 kB' 'Active(anon): 12203100 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 
'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543716 kB' 'Mapped: 167148 kB' 'Shmem: 11662588 kB' 'KReclaimable: 419300 kB' 'Slab: 703284 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 283984 kB' 'KernelStack: 10000 kB' 'PageTables: 7092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13193876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189900 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.317 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.317
[identical xtrace iterations repeat for each remaining /proc/meminfo field until HugePages_Total is reached]
10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29
-- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 
'MemFree: 18137300 kB' 'MemUsed: 14744448 kB' 'SwapCached: 0 kB' 'Active: 8239104 kB' 'Inactive: 3394500 kB' 'Active(anon): 8046556 kB' 'Inactive(anon): 0 kB' 'Active(file): 192548 kB' 'Inactive(file): 3394500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11332864 kB' 'Mapped: 94396 kB' 'AnonPages: 303908 kB' 'Shmem: 7745816 kB' 'KernelStack: 5896 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279620 kB' 'Slab: 426208 kB' 'SReclaimable: 279620 kB' 'SUnreclaim: 146588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.319 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ [... identical "[[ <field> == HugePages_Surp ]] / continue" iterations repeated for SwapCached through HugePages_Free ...] 00:04:24.320 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.320 10:25:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:24.320 10:25:12
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:24.320 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.320 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.320 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.320 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.320 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:24.320 node0=1024 expecting 1024 00:04:24.320 10:25:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.320 00:04:24.320 real 0m2.174s 00:04:24.320 user 0m0.604s 00:04:24.320 sys 0m0.743s 00:04:24.320 10:25:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.321 10:25:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:24.321 ************************************ 00:04:24.321 END TEST default_setup 00:04:24.321 ************************************ 00:04:24.321 10:25:12 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:24.321 10:25:12 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.321 10:25:12 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.321 10:25:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.321 ************************************ 00:04:24.321 START TEST per_node_1G_alloc 00:04:24.321 ************************************ 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:24.321 
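The trace above is `get_meminfo` from setup/common.sh walking a `key: value` file with `IFS=': '` and `read -r var val _`, hitting `continue` for every non-matching key and echoing the value once the requested field (`HugePages_Total`, then the per-node `HugePages_Surp`) matches. A minimal standalone sketch of that scan pattern — the function name `get_field` and the sample file are illustrative, not SPDK's actual helper, which reads /proc/meminfo or /sys/devices/system/node/node<N>/meminfo:

```shell
# Sketch of the get_meminfo-style scan traced above: split each line on
# ": ", skip until the key matches, then print the value. The unit
# suffix ("kB"), when present, lands in the throwaway third field.
get_field() {
    local want=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}

# Illustrative sample; a real run reads the live meminfo files.
sample=$(mktemp)
cat > "$sample" <<'EOF'
MemTotal: 32881748 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0
EOF

get_field HugePages_Total "$sample"   # prints 1024
get_field HugePages_Surp "$sample"    # prints 0
rm -f "$sample"
```

The per-node lookup is the same loop after `mem_f` is repointed at the node's meminfo copy, which is why the trace repeats the full per-key scan once for every field queried.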
10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:24.321 10:25:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.321 10:25:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:25.259 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:25.259 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:25.259 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:25.259 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:25.259 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:25.259 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:25.259 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:25.259 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:25.259 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:25.259 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:25.259 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:25.259 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:25.259 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:25.259 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:25.259 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:25.259 
0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:25.259 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:25.524 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:25.524 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:25.524 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.524 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.524 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.524 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.524 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.524 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.525 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32125304 kB' 'MemAvailable: 36061508 kB' 'Buffers: 3900 kB' 'Cached: 15787336 kB' 'SwapCached: 0 kB' 'Active: 12637028 kB' 'Inactive: 3694608 kB' 'Active(anon): 12203068 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543560 kB' 'Mapped: 167204 kB' 'Shmem: 11662668 kB' 'KReclaimable: 419300 kB' 'Slab: 703304 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 284004 kB' 'KernelStack: 10048 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13194056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189932 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:25.525 10:25:13 [... identical "[[ <field> == AnonHugePages ]] / continue" iterations repeated for MemFree through Inactive(anon) ...] 10:25:13 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.525 
10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.525 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.526 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32125940 kB' 'MemAvailable: 36062144 kB' 'Buffers: 3900 kB' 'Cached: 15787340 kB' 'SwapCached: 0 kB' 'Active: 12636892 kB' 'Inactive: 3694608 kB' 'Active(anon): 12202932 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543388 kB' 'Mapped: 167140 kB' 'Shmem: 11662672 kB' 'KReclaimable: 419300 kB' 'Slab: 703304 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 284004 kB' 'KernelStack: 10064 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13194076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189916 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 
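The trace above shows `setup/common.sh`'s `get_meminfo` helper reading a meminfo snapshot and scanning it one `key: value` line at a time with `IFS=': '` until the requested field matches. A minimal sketch of that loop, reconstructed from the trace rather than taken from the SPDK source (the explicit file argument and the sample file are added here for illustration; the real helper reads `/proc/meminfo` or a per-node meminfo directly):

```shell
#!/usr/bin/env bash
# Sketch (assumption-based reconstruction from the trace): split each
# meminfo line on ': ', print the value of the requested key, and fall
# back to 0 when the key is absent -- matching the "echo 0" seen above
# for AnonHugePages.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"   # value only; the trailing unit lands in $_
            return 0
        fi
    done <"$mem_f"
    echo 0  # key never matched -> report 0
}

# Demonstrate against a small sample instead of the live /proc/meminfo:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 52291180 kB' 'HugePages_Total: 1024' \
    'HugePages_Surp: 0' 'AnonHugePages: 0 kB' >"$sample"
get_meminfo HugePages_Total "$sample"   # prints 1024
get_meminfo AnonHugePages "$sample"     # prints 0
rm -f "$sample"
```

Because `IFS=': '` treats both the colon and the space as separators, `read -r var val _` lands the key in `var`, the number in `val`, and any trailing `kB` unit in `_`, which is why the trace compares `$var` against the literal key and echoes a bare number.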
00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.526 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[identical IFS=': ' / read / compare / continue trace repeated for each /proc/meminfo field while scanning for HugePages_Surp]
00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.528 10:25:13
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.528 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32126260 kB' 'MemAvailable: 36062464 kB' 'Buffers: 3900 kB' 'Cached: 15787356 kB' 'SwapCached: 0 kB' 'Active: 12636756 kB' 'Inactive: 3694608 kB' 
'Active(anon): 12202796 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543228 kB' 'Mapped: 167140 kB' 'Shmem: 11662688 kB' 'KReclaimable: 419300 kB' 'Slab: 703364 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 284064 kB' 'KernelStack: 10032 kB' 'PageTables: 7380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13194100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189916 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 
10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.529 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.530 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.531 nr_hugepages=1024 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.531 resv_hugepages=0 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.531 surplus_hugepages=0 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.531 anon_hugepages=0 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 
-- # get_meminfo HugePages_Total 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32124748 kB' 'MemAvailable: 36060952 kB' 'Buffers: 3900 kB' 'Cached: 15787380 kB' 'SwapCached: 0 kB' 'Active: 12636812 kB' 'Inactive: 3694608 kB' 'Active(anon): 12202852 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543300 kB' 'Mapped: 167140 kB' 'Shmem: 11662712 kB' 'KReclaimable: 419300 kB' 'Slab: 703364 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 284064 kB' 'KernelStack: 10064 kB' 'PageTables: 7468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13194120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189900 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 
10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.531 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.533 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.533 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 19196692 kB' 'MemUsed: 13685056 kB' 'SwapCached: 0 kB' 'Active: 8239456 kB' 'Inactive: 3394500 kB' 'Active(anon): 8046908 kB' 'Inactive(anon): 0 kB' 'Active(file): 192548 kB' 'Inactive(file): 3394500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11332916 kB' 'Mapped: 94404 kB' 'AnonPages: 304224 kB' 'Shmem: 7745868 kB' 'KernelStack: 5912 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279620 kB' 'Slab: 426164 kB' 'SReclaimable: 279620 kB' 'SUnreclaim: 146544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 
00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 12926044 kB' 'MemUsed: 6483388 kB' 'SwapCached: 0 kB' 'Active: 4397332 kB' 'Inactive: 300108 kB' 'Active(anon): 4155920 kB' 'Inactive(anon): 0 kB' 'Active(file): 241412 kB' 'Inactive(file): 300108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4458388 kB' 'Mapped: 72736 kB' 'AnonPages: 239064 kB' 'Shmem: 3916868 kB' 'KernelStack: 4152 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139680 kB' 'Slab: 277200 kB' 'SReclaimable: 139680 kB' 'SUnreclaim: 137520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 
10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:25.536 node0=512 expecting 512 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:25.536 node1=512 expecting 512 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:25.536 00:04:25.536 real 0m1.227s 00:04:25.536 user 0m0.552s 00:04:25.536 sys 0m0.708s 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.536 10:25:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:25.536 ************************************ 00:04:25.536 END TEST per_node_1G_alloc 00:04:25.536 ************************************ 00:04:25.536 10:25:13 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:25.536 10:25:13 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.536 10:25:13 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.536 10:25:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.536 ************************************ 00:04:25.536 START TEST even_2G_alloc 00:04:25.536 ************************************ 00:04:25.536 10:25:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 
00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.795 10:25:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:26.739 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:26.739 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:26.739 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:26.739 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:26.739 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:26.739 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:26.739 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:26.739 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:26.739 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:26.739 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:26.739 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:26.739 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:26.739 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:26.739 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:26.739 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:26.739 0000:80:04.1 (8086 3c21): Already using the vfio-pci 
driver 00:04:26.739 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.739 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.739 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.740 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32133496 kB' 'MemAvailable: 36069700 kB' 'Buffers: 3900 kB' 'Cached: 15787464 kB' 'SwapCached: 0 kB' 'Active: 12638284 kB' 'Inactive: 3694608 kB' 'Active(anon): 12204324 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544296 kB' 'Mapped: 167264 kB' 'Shmem: 11662796 kB' 'KReclaimable: 419300 kB' 'Slab: 703308 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 284008 kB' 'KernelStack: 10080 kB' 'PageTables: 7496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13207276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189948 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:26.740 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.740 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.740 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.740 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.740 10:25:15 
setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32133284 kB' 'MemAvailable: 36069488 kB' 'Buffers: 3900 kB' 'Cached: 15787464 kB' 'SwapCached: 0 kB' 'Active: 12637768 kB' 'Inactive: 3694608 kB' 'Active(anon): 12203808 kB' 
'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543764 kB' 'Mapped: 167264 kB' 'Shmem: 11662796 kB' 'KReclaimable: 419300 kB' 'Slab: 703308 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 284008 kB' 'KernelStack: 10048 kB' 'PageTables: 7396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13194124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189884 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.741 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- #
[[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.742 
10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.742 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 52291180 kB' 'MemFree: 32132528 kB' 'MemAvailable: 36068732 kB' 'Buffers: 3900 kB' 'Cached: 15787484 kB' 'SwapCached: 0 kB' 'Active: 12636784 kB' 'Inactive: 3694608 kB' 'Active(anon): 12202824 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543208 kB' 'Mapped: 167264 kB' 'Shmem: 11662816 kB' 'KReclaimable: 419300 kB' 'Slab: 703300 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 284000 kB' 'KernelStack: 10032 kB' 'PageTables: 7348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13194144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189868 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.743 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.744 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 
10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.745 nr_hugepages=1024 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.745 resv_hugepages=0 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.745 surplus_hugepages=0 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.745 anon_hugepages=0 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.745 10:25:15 
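The trace above is `setup/common.sh`'s `get_meminfo` scanning `/proc/meminfo` one field at a time: it splits each line on `': '` with `read -r var val _`, `continue`s past non-matching field names, and `echo`s the value once the requested field (here `HugePages_Surp`, then `HugePages_Rsvd`) matches. A minimal standalone sketch of that pattern, with an optional file argument added here purely for illustration (the real helper also handles per-node `/sys/devices/system/node/*/meminfo` paths, which this sketch omits):

```shell
# Hypothetical sketch of the get_meminfo loop traced above.
# $1 = field name to look up (e.g. HugePages_Surp)
# $2 = meminfo-format file (defaults to /proc/meminfo)
get_meminfo() {
    get=$1
    file=${2:-/proc/meminfo}
    # Split "Field:   value [kB]" into var/val on ':' and spaces,
    # exactly as the IFS=': ' / read -r var val _ lines in the trace do.
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$get" ]; then
            printf '%s\n' "$val"
            return 0
        fi
    done < "$file"
    return 1
}
```

With the meminfo snapshot printed in the trace, `get_meminfo HugePages_Surp` and `get_meminfo HugePages_Rsvd` both yield `0`, matching the `surp=0` and `resv=0` assignments in `setup/hugepages.sh`.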
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.745 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32132528 kB' 'MemAvailable: 36068732 kB' 'Buffers: 3900 kB' 'Cached: 15787512 kB' 'SwapCached: 0 kB' 'Active: 12636956 kB' 'Inactive: 3694608 kB' 'Active(anon): 12202996 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543416 kB' 'Mapped: 167116 kB' 'Shmem: 11662844 kB' 'KReclaimable: 419300 kB' 'Slab: 703296 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 283996 kB' 'KernelStack: 10064 kB' 'PageTables: 7456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13194540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189884 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.746 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 
10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.747 
10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 19199364 kB' 'MemUsed: 13682384 kB' 'SwapCached: 0 kB' 'Active: 8239780 kB' 'Inactive: 3394500 kB' 'Active(anon): 8047232 kB' 'Inactive(anon): 0 kB' 'Active(file): 192548 kB' 'Inactive(file): 3394500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11333000 kB' 'Mapped: 94416 kB' 'AnonPages: 304476 kB' 'Shmem: 7745952 kB' 'KernelStack: 5880 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279620 kB' 'Slab: 426136 kB' 'SReclaimable: 279620 kB' 'SUnreclaim: 146516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:26.747 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.748 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 
00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 12933416 kB' 'MemUsed: 6476016 kB' 'SwapCached: 0 kB' 'Active: 4397156 kB' 'Inactive: 300108 kB' 'Active(anon): 4155744 kB' 'Inactive(anon): 0 kB' 'Active(file): 241412 kB' 'Inactive(file): 300108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4458428 kB' 'Mapped: 72700 kB' 'AnonPages: 238904 kB' 'Shmem: 3916908 kB' 'KernelStack: 4168 kB' 'PageTables: 3508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139680 kB' 'Slab: 277160 kB' 'SReclaimable: 139680 kB' 'SUnreclaim: 137480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.749 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:26.750 node0=512 expecting 512 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:26.750 node1=512 expecting 512 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:26.750 00:04:26.750 real 0m1.188s 
00:04:26.750 user 0m0.531s 00:04:26.750 sys 0m0.686s 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:26.750 10:25:15 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:26.750 ************************************ 00:04:26.750 END TEST even_2G_alloc 00:04:26.750 ************************************ 00:04:26.750 10:25:15 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:26.750 10:25:15 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:26.750 10:25:15 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.009 10:25:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.009 ************************************ 00:04:27.009 START TEST odd_alloc 00:04:27.009 ************************************ 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.009 10:25:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.952 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:27.952 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:27.952 
0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:27.952 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:27.952 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:27.952 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:27.952 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:27.952 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:27.952 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:27.952 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:27.952 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:27.952 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:27.952 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:27.952 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:27.952 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:27.952 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:27.952 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.952 
10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32138232 kB' 'MemAvailable: 36074436 kB' 'Buffers: 3900 kB' 'Cached: 15787600 kB' 'SwapCached: 0 kB' 'Active: 12633548 kB' 'Inactive: 3694608 kB' 'Active(anon): 12199588 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539820 kB' 'Mapped: 166048 kB' 'Shmem: 11662932 kB' 'KReclaimable: 419300 kB' 'Slab: 703236 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 283936 kB' 'KernelStack: 10016 kB' 'PageTables: 7148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 13179552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189868 kB' 
'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 
10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.952 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.952 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.953 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 
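The trace above is `get_meminfo AnonHugePages` from `setup/common.sh`: the function captures the `/proc/meminfo` snapshot, then re-reads it with `IFS=': '` and `continue`s past every key until the requested one matches, finally echoing its value (here `0`). A condensed sketch of that pattern (reading from a temp file instead of `/proc/meminfo` so the example is self-contained; the original also strips `Node <n>` prefixes for per-node lookups, omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern: split each meminfo line on ': ' into
# key / value / unit, and print the value for the requested key.
get_meminfo() {
  local get=$1 src=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then   # the original spells this \A\n\o\n... to defeat globbing
      echo "$val"
      return 0
    fi
  done < "$src"
  return 1
}

# Demo with a small meminfo sample (values taken from the snapshot above)
sample=$(mktemp)
printf '%s\n' 'MemTotal: 52291180 kB' 'AnonHugePages: 0 kB' 'HugePages_Surp: 0' > "$sample"
get_meminfo AnonHugePages "$sample"   # prints: 0
rm -f "$sample"
```

The test harness runs this with `set -x` and a per-line timestamp prompt, which is why every `continue` and `read` iteration appears as its own log record.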
'MemFree: 32144956 kB' 'MemAvailable: 36081160 kB' 'Buffers: 3900 kB' 'Cached: 15787604 kB' 'SwapCached: 0 kB' 'Active: 12633140 kB' 'Inactive: 3694608 kB' 'Active(anon): 12199180 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539456 kB' 'Mapped: 166104 kB' 'Shmem: 11662936 kB' 'KReclaimable: 419300 kB' 'Slab: 703240 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 283940 kB' 'KernelStack: 10048 kB' 'PageTables: 7236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 13180324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189820 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.954 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:27.955 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32144104 kB' 'MemAvailable: 36080308 kB' 'Buffers: 3900 kB' 'Cached: 15787620 kB' 'SwapCached: 0 kB' 'Active: 12633864 kB' 'Inactive: 3694608 kB' 'Active(anon): 12199904 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540188 kB' 'Mapped: 166088 kB' 'Shmem: 11662952 kB' 'KReclaimable: 419300 kB' 'Slab: 703244 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 283944 kB' 'KernelStack: 10032 kB' 'PageTables: 7200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 
kB' 'Committed_AS: 13179592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189756 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.955 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 
10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.956 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 
10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:27.957 nr_hugepages=1025 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.957 resv_hugepages=0 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.957 surplus_hugepages=0 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.957 anon_hugepages=0 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@19 -- # local var val 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.957 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32144104 kB' 'MemAvailable: 36080308 kB' 'Buffers: 3900 kB' 'Cached: 15787640 kB' 'SwapCached: 0 kB' 'Active: 12633436 kB' 'Inactive: 3694608 kB' 'Active(anon): 12199476 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539776 kB' 'Mapped: 166028 kB' 'Shmem: 11662972 kB' 'KReclaimable: 419300 kB' 'Slab: 703236 kB' 'SReclaimable: 419300 kB' 'SUnreclaim: 283936 kB' 'KernelStack: 10016 kB' 'PageTables: 7140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 13179612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189756 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.958 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@27 -- # local node 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.959 
10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 19201828 kB' 'MemUsed: 13679920 kB' 'SwapCached: 0 kB' 'Active: 8237172 kB' 'Inactive: 3394500 kB' 'Active(anon): 8044624 kB' 'Inactive(anon): 0 kB' 'Active(file): 192548 kB' 'Inactive(file): 3394500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11333068 kB' 'Mapped: 93928 kB' 'AnonPages: 301736 kB' 'Shmem: 7746020 kB' 'KernelStack: 5864 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279620 kB' 'Slab: 426092 kB' 'SReclaimable: 279620 kB' 'SUnreclaim: 146472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.959 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 
10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.960 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.221 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.221 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.221 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.221 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.221 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.221 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.221 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.221 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.221 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 
10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.222 
10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 12942668 kB' 'MemUsed: 6466764 kB' 'SwapCached: 0 kB' 'Active: 4396260 kB' 'Inactive: 300108 kB' 'Active(anon): 4154848 kB' 'Inactive(anon): 0 kB' 'Active(file): 241412 kB' 'Inactive(file): 300108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4458516 kB' 'Mapped: 72100 kB' 'AnonPages: 238048 kB' 'Shmem: 3916996 kB' 'KernelStack: 4152 kB' 'PageTables: 3260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139680 kB' 'Slab: 277144 kB' 'SReclaimable: 139680 kB' 
'SUnreclaim: 137464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.222 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 
10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:28.223 node0=512 expecting 513 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:28.223 node1=513 expecting 512 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:28.223 00:04:28.223 real 0m1.219s 00:04:28.223 user 0m0.541s 00:04:28.223 sys 0m0.711s 00:04:28.223 10:25:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:28.224 10:25:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:28.224 ************************************ 00:04:28.224 END TEST odd_alloc 00:04:28.224 ************************************ 00:04:28.224 10:25:16 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:28.224 10:25:16 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:28.224 10:25:16 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:28.224 10:25:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:28.224 ************************************ 00:04:28.224 START TEST custom_alloc 00:04:28.224 ************************************ 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:28.224 10:25:16 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:28.224 10:25:16 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:28.224 10:25:16 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.224 10:25:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.169 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:29.169 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:29.169 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:29.169 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:29.169 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:29.169 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:29.169 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:29.169 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:29.169 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:29.169 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:29.169 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:29.169 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:29.169 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:29.169 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:29.169 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:29.169 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:29.169 0000:80:04.0 (8086 3c20): Already using 
the vfio-pci driver 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.169 10:25:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31082616 kB' 'MemAvailable: 35018804 kB' 'Buffers: 3900 kB' 'Cached: 15787728 kB' 'SwapCached: 0 kB' 'Active: 12633796 kB' 'Inactive: 3694608 kB' 'Active(anon): 12199836 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539924 kB' 'Mapped: 166100 kB' 'Shmem: 11663060 kB' 'KReclaimable: 419284 kB' 'Slab: 702944 kB' 'SReclaimable: 419284 kB' 'SUnreclaim: 283660 kB' 'KernelStack: 9952 kB' 'PageTables: 7068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 13179680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189836 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.169 10:25:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.169 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 
10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.170 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.171 
10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31082256 kB' 'MemAvailable: 35018444 kB' 'Buffers: 3900 kB' 'Cached: 15787732 kB' 'SwapCached: 0 kB' 'Active: 12633776 kB' 'Inactive: 3694608 kB' 'Active(anon): 12199816 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 539868 kB' 'Mapped: 166040 kB' 'Shmem: 11663064 kB' 'KReclaimable: 419284 kB' 'Slab: 702940 kB' 'SReclaimable: 419284 kB' 'SUnreclaim: 283656 kB' 'KernelStack: 9984 kB' 'PageTables: 7128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 13179700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189788 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 
10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.171 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.172 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.173 10:25:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31082800 kB' 'MemAvailable: 35018988 kB' 'Buffers: 3900 kB' 'Cached: 15787768 kB' 'SwapCached: 0 kB' 'Active: 12633376 kB' 'Inactive: 3694608 kB' 'Active(anon): 12199416 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 539456 kB' 'Mapped: 166040 kB' 'Shmem: 11663100 kB' 'KReclaimable: 419284 kB' 'Slab: 702988 kB' 'SReclaimable: 419284 kB' 'SUnreclaim: 283704 kB' 'KernelStack: 9968 kB' 'PageTables: 7104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 13179720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189788 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 
10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.173 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.174 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.175 10:25:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:29.175 nr_hugepages=1536 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.175 resv_hugepages=0 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.175 surplus_hugepages=0 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.175 anon_hugepages=0 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31082800 kB' 'MemAvailable: 35018988 kB' 
'Buffers: 3900 kB' 'Cached: 15787768 kB' 'SwapCached: 0 kB' 'Active: 12633712 kB' 'Inactive: 3694608 kB' 'Active(anon): 12199752 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539848 kB' 'Mapped: 166040 kB' 'Shmem: 11663100 kB' 'KReclaimable: 419284 kB' 'Slab: 702988 kB' 'SReclaimable: 419284 kB' 'SUnreclaim: 283704 kB' 'KernelStack: 9984 kB' 'PageTables: 7148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 13179740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189788 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.175 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace repeats elided: the same `[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue / IFS=': ' / read -r var val _` pattern recurs once for each remaining /proc/meminfo key (MemFree through Unaccepted) until HugePages_Total matches ...]
00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in
/sys/devices/system/node/node+([0-9]) 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 19184340 kB' 'MemUsed: 
13697408 kB' 'SwapCached: 0 kB' 'Active: 8237132 kB' 'Inactive: 3394500 kB' 'Active(anon): 8044584 kB' 'Inactive(anon): 0 kB' 'Active(file): 192548 kB' 'Inactive(file): 3394500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11333076 kB' 'Mapped: 93940 kB' 'AnonPages: 301628 kB' 'Shmem: 7746028 kB' 'KernelStack: 5896 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279620 kB' 'Slab: 426004 kB' 'SReclaimable: 279620 kB' 'SUnreclaim: 146384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.178 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:29.178 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 10:25:17 [xtrace elided: identical IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue iterations over the remaining node0 meminfo fields] setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.488 10:25:17
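The long runs of `IFS=': '` / `read -r var val _` / `continue` above are setup/common.sh's get_meminfo scanning a (per-node) meminfo file field by field until the requested key matches. A minimal self-contained sketch of that pattern; the file path and sample values below are illustrative, not from the CI host:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo field scan seen in the trace: read a meminfo
# file, strip any "Node <n> " prefix, then split each line on ': ' and
# keep skipping (the trace's long run of "continue") until the key matches.
shopt -s extglob

get_meminfo_sketch() {
  local get=$1 mem_f=$2 var val _
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")    # drop "Node 1 " style prefixes
  local line
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue  # mirrors the repeated continues above
    echo "$val"
    return 0
  done
  return 1
}

# Synthetic per-node meminfo file standing in for
# /sys/devices/system/node/node1/meminfo:
printf '%s\n' 'Node 1 HugePages_Total: 1024' \
              'Node 1 HugePages_Free: 1024' \
              'Node 1 HugePages_Surp: 0' > /tmp/meminfo_sketch
get_meminfo_sketch HugePages_Surp /tmp/meminfo_sketch   # prints 0
```

Note the trace's `[[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` comparisons: every character of the right-hand side is backslash-escaped so the match is literal rather than a glob.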
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.488 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 11898460 kB' 'MemUsed: 7510972 kB' 'SwapCached: 0 kB' 'Active: 4396264 kB' 'Inactive: 300108 kB' 'Active(anon): 4154852 kB' 'Inactive(anon): 0 kB' 'Active(file): 241412 kB' 'Inactive(file): 300108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4458616 kB' 'Mapped: 72100 kB' 'AnonPages: 237800 kB' 'Shmem: 3917096 kB' 'KernelStack: 4072 kB' 'PageTables: 3268 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139664 kB' 'Slab: 276984 kB' 'SReclaimable: 139664 kB' 'SUnreclaim: 137320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.488 10:25:17 [xtrace elided: identical IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue iterations over the remaining node1 meminfo fields] setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.490 10:25:17
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:29.490 node0=512 expecting 512 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:29.490 node1=1024 expecting 1024 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:29.490 00:04:29.490 real 0m1.154s 00:04:29.490 user 0m0.507s 00:04:29.490 sys 0m0.673s 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:29.490 10:25:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.490 ************************************ 00:04:29.490 END TEST custom_alloc 00:04:29.490 ************************************ 00:04:29.490 10:25:17 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:29.490 10:25:17 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.490 10:25:17 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.490 10:25:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.490 ************************************ 00:04:29.490 START TEST no_shrink_alloc 00:04:29.490 ************************************ 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:29.490 10:25:17 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 
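The get_test_nr_hugepages trace above turns a requested size into a page count and pins it to the user-supplied node: 2097152 kB divided by the default 2048 kB hugepage size gives the `nr_hugepages=1024` seen in the log. A sketch of that arithmetic; the helper structure is illustrative, only the numbers come from the trace:

```shell
#!/usr/bin/env bash
# Sketch of the no_shrink_alloc sizing step traced above: a 2097152 kB
# request with 2048 kB default hugepages yields 1024 pages, all assigned
# to the single node id the caller passed in (node 0 here).
size_kb=2097152
default_hugepage_kb=2048
nr_hugepages=$(( size_kb / default_hugepage_kb ))

node_ids=(0)                 # mirrors node_ids=('0') in the trace
declare -a nodes_test
for node in "${node_ids[@]}"; do
  nodes_test[node]=$nr_hugepages
done
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"
```

With more than one node id, the same count would be written per listed node, which is why the earlier custom_alloc run ends up checking `node0=512` and `node1=1024` separately.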
00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.490 10:25:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:30.444 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:30.444 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:30.444 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:30.444 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:30.444 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:30.444 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:30.444 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:30.444 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:30.444 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:30.444 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:30.444 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:30.444 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:30.444 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:30.444 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:30.444 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:30.444 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:30.444 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:30.444 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:30.444 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:30.444 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.444 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.444 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32131600 kB' 'MemAvailable: 36067780 kB' 'Buffers: 3900 kB' 'Cached: 15787856 kB' 'SwapCached: 0 kB' 'Active: 12633688 kB' 'Inactive: 3694608 kB' 'Active(anon): 12199728 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539688 kB' 'Mapped: 166152 kB' 'Shmem: 11663188 kB' 'KReclaimable: 419276 kB' 'Slab: 703024 kB' 'SReclaimable: 419276 kB' 'SUnreclaim: 283748 kB' 'KernelStack: 9952 kB' 'PageTables: 7076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13179808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189804 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 [xtrace elided: identical IFS=': ' / read -r var val _ / [[ <field> == AnonHugePages ]] / continue iterations over the remaining meminfo fields]
val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.445 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.446 
10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32132540 kB' 'MemAvailable: 36068720 kB' 'Buffers: 3900 kB' 'Cached: 15787856 kB' 'SwapCached: 0 kB' 'Active: 12634028 kB' 'Inactive: 3694608 kB' 'Active(anon): 12200068 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540028 kB' 'Mapped: 
166128 kB' 'Shmem: 11663188 kB' 'KReclaimable: 419276 kB' 'Slab: 703008 kB' 'SReclaimable: 419276 kB' 'SUnreclaim: 283732 kB' 'KernelStack: 9968 kB' 'PageTables: 7100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13179824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189788 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.446 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 
10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.447 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 
10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32132252 kB' 'MemAvailable: 36068432 kB' 'Buffers: 3900 kB' 'Cached: 15787876 kB' 'SwapCached: 0 kB' 'Active: 12633916 kB' 'Inactive: 3694608 kB' 'Active(anon): 12199956 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539900 kB' 'Mapped: 166052 kB' 'Shmem: 11663208 kB' 'KReclaimable: 419276 kB' 'Slab: 703016 kB' 'SReclaimable: 419276 kB' 'SUnreclaim: 283740 kB' 'KernelStack: 9984 kB' 'PageTables: 7140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13179848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189788 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.448 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.449 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 
10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.450 nr_hugepages=1024 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.450 resv_hugepages=0 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.450 surplus_hugepages=0 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.450 anon_hugepages=0 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.450 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32131748 kB' 'MemAvailable: 36067928 kB' 'Buffers: 3900 kB' 'Cached: 15787896 kB' 'SwapCached: 0 kB' 'Active: 12633928 kB' 'Inactive: 3694608 kB' 'Active(anon): 12199968 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539900 kB' 'Mapped: 166052 kB' 'Shmem: 11663228 kB' 'KReclaimable: 419276 kB' 'Slab: 703016 kB' 'SReclaimable: 419276 kB' 'SUnreclaim: 283740 kB' 'KernelStack: 9984 kB' 'PageTables: 7140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13179872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189804 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 
0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.450 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.451 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 18135584 kB' 'MemUsed: 14746164 kB' 'SwapCached: 0 kB' 'Active: 8240644 kB' 'Inactive: 3394500 kB' 'Active(anon): 8048096 kB' 'Inactive(anon): 0 kB' 'Active(file): 192548 kB' 'Inactive(file): 3394500 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11333116 kB' 'Mapped: 94388 kB' 'AnonPages: 305104 kB' 'Shmem: 7746068 kB' 'KernelStack: 5832 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279620 kB' 'Slab: 425892 kB' 'SReclaimable: 279620 kB' 'SUnreclaim: 146272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.452 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.452 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 
10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 
10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.453 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:30.454 node0=1024 expecting 1024 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.454 10:25:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:31.389 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:31.389 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:31.389 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:31.389 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:31.389 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:31.653 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:31.653 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:31.653 0000:00:04.1 (8086 3c21): Already using the vfio-pci 
driver 00:04:31.653 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:31.653 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:31.653 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:31.653 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:31.653 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:31.653 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:31.653 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:31.653 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:31.653 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:31.653 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 
00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32116376 kB' 'MemAvailable: 36052556 kB' 'Buffers: 3900 kB' 'Cached: 15787960 kB' 'SwapCached: 0 kB' 'Active: 12634656 kB' 'Inactive: 3694608 kB' 'Active(anon): 12200696 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540480 kB' 'Mapped: 166124 kB' 'Shmem: 11663292 kB' 'KReclaimable: 419276 kB' 'Slab: 703108 kB' 'SReclaimable: 419276 kB' 'SUnreclaim: 283832 kB' 'KernelStack: 9952 kB' 'PageTables: 7060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13180048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189868 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.654 10:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.655 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32118772 kB' 'MemAvailable: 36054952 kB' 'Buffers: 3900 kB' 'Cached: 15787964 kB' 'SwapCached: 0 kB' 'Active: 12634524 kB' 'Inactive: 3694608 kB' 'Active(anon): 12200564 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540460 kB' 'Mapped: 166060 kB' 'Shmem: 11663296 kB' 'KReclaimable: 419276 kB' 'Slab: 703076 kB' 'SReclaimable: 419276 kB' 'SUnreclaim: 283800 kB' 'KernelStack: 10016 kB' 'PageTables: 7220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13180064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189820 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:31.655 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.655 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical @31/@32 trace repeats for every remaining /proc/meminfo key from MemFree through HugePages_Rsvd: each non-matching key hits 'continue', then IFS=': ' and 'read -r var val _' advance to the next line ...] 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
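[editor's note] The trace above is `setup/common.sh`'s `get_meminfo` helper scanning `/proc/meminfo` one field at a time: each line is split on `': '` into key and value, non-matching keys fall through to `continue`, and the requested key (`HugePages_Surp` here) triggers `echo`/`return`. A minimal sketch of that pattern, assuming only what the trace shows — the function name `get_meminfo_from` and the temp-file demo are illustrative, not the script's actual interface:

```shell
#!/usr/bin/env bash
# Sketch of the field-scanning pattern visible in the setup/common.sh trace:
# split each meminfo-style line on ': ', skip keys until the requested one
# matches, then print its value and stop. Hypothetical helper, not SPDK's API.
get_meminfo_from() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        # Matching key found: emit its value, mirroring the 'echo 0' / 'return 0'
        # pair seen in the log for HugePages_Surp.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
        # Non-matching key: loop around, mirroring the repeated 'continue' lines.
    done < "$file"
    return 1
}

# Demo against a small meminfo-style snippet instead of the live /proc file.
tmp=$(mktemp)
printf '%s\n' 'MemTotal: 52291180 kB' 'HugePages_Surp: 0' > "$tmp"
get_meminfo_from HugePages_Surp "$tmp"   # prints 0
rm -f "$tmp"
```

The real script reads `/proc/meminfo` directly (or a per-node `meminfo` when a node is given), which is why the trace repeats the `IFS=': '` / `read -r var val _` pair once per field.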
00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32119188 kB' 'MemAvailable: 36055368 kB' 'Buffers: 3900 kB' 'Cached: 15787984 kB' 'SwapCached: 0 kB' 'Active: 12634064 kB' 'Inactive: 3694608 kB' 'Active(anon): 12200104 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540028 kB' 'Mapped: 166060 kB' 'Shmem: 11663316 kB' 'KReclaimable: 419276 kB' 'Slab: 703124 kB' 
'SReclaimable: 419276 kB' 'SUnreclaim: 283848 kB' 'KernelStack: 9984 kB' 'PageTables: 7144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13179772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189820 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:31.656 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.657 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical @31/@32 trace repeats for each /proc/meminfo key from MemFree through VmallocUsed, none matching HugePages_Rsvd; log truncated mid-scan ...] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.658 10:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:31.658 nr_hugepages=1024 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.658 resv_hugepages=0 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.658 surplus_hugepages=0 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.658 anon_hugepages=0 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.658 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32115160 kB' 'MemAvailable: 36051348 kB' 'Buffers: 3900 kB' 'Cached: 15788004 kB' 'SwapCached: 0 kB' 'Active: 12637976 kB' 'Inactive: 3694608 kB' 'Active(anon): 12204016 kB' 'Inactive(anon): 0 kB' 'Active(file): 433960 kB' 'Inactive(file): 3694608 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543856 kB' 'Mapped: 166496 kB' 'Shmem: 11663336 kB' 'KReclaimable: 419284 kB' 'Slab: 703120 kB' 'SReclaimable: 419284 kB' 'SUnreclaim: 283836 kB' 'KernelStack: 9968 kB' 'PageTables: 7096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 13184104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189820 kB' 'VmallocChunk: 0 kB' 'Percpu: 24192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1530148 kB' 'DirectMap2M: 29849600 kB' 'DirectMap1G: 29360128 kB' 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 10:25:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': '
[... identical IFS=': ' / read -r var val _ / compare / continue trace repeated for each /proc/meminfo key (MemFree through Unaccepted), none matching HugePages_Total ...]
00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.660 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 18135900 kB' 'MemUsed: 14745848 kB' 'SwapCached: 0 kB' 'Active: 8237524 kB' 'Inactive: 3394500 kB' 'Active(anon): 8044976 kB' 'Inactive(anon): 0 kB' 'Active(file): 192548 kB' 'Inactive(file): 3394500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11333128 kB' 'Mapped: 94724 kB' 'AnonPages: 302024 kB' 'Shmem: 7746080 kB' 'KernelStack: 5880 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279628 kB' 'Slab: 425948 kB' 'SReclaimable: 279628 kB' 'SUnreclaim: 146320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.662 10:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 
'node0=1024 expecting 1024' 00:04:31.662 node0=1024 expecting 1024 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:31.662 00:04:31.662 real 0m2.382s 00:04:31.662 user 0m1.041s 00:04:31.662 sys 0m1.396s 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.662 10:25:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:31.662 ************************************ 00:04:31.662 END TEST no_shrink_alloc 00:04:31.662 ************************************ 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.920 10:25:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:31.920 10:25:20 setup.sh.hugepages -- 
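The long per-field trace above is `common.sh`'s `get_meminfo` walking a meminfo listing with `IFS=': '` and echoing the value of the one requested field. A minimal standalone sketch of that pattern (the function name and the inlined sample input are illustrative, not from the script; the real helper reads `/proc/meminfo` or a node's `meminfo`):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: split each
# "Field: value" line on ': ', skip non-matching fields, print the value.
# Sample input is inlined so the result is deterministic.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the field we want
        echo "$val"
        return 0
    done <<'EOF'
MemTotal: 32881748 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0
EOF
    return 1
}

get_meminfo_sketch HugePages_Total   # prints 1024
```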
setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:31.920 00:04:31.920 real 0m9.778s 00:04:31.920 user 0m3.957s 00:04:31.920 sys 0m5.188s 00:04:31.920 10:25:20 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.920 10:25:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:31.920 ************************************ 00:04:31.920 END TEST hugepages 00:04:31.920 ************************************ 00:04:31.920 10:25:20 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:31.920 10:25:20 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.920 10:25:20 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.920 10:25:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:31.920 ************************************ 00:04:31.920 START TEST driver 00:04:31.920 ************************************ 00:04:31.920 10:25:20 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:31.920 * Looking for test storage... 
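The `clear_hp` trace just above loops over every NUMA node and echoes 0 into each `hugepages-*/nr_hugepages` file. A sketch of that cleanup, using a throwaway directory tree in place of `/sys/devices/system/node` so it runs without root (an assumption; the real script writes to sysfs directly):

```shell
#!/usr/bin/env bash
# Sketch of the clear_hp step: for every node, zero each huge-page pool.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/node0/hugepages/hugepages-2048kB" \
         "$sysfs/node1/hugepages/hugepages-1048576kB"
echo 512 > "$sysfs/node0/hugepages/hugepages-2048kB/nr_hugepages"
echo 4   > "$sysfs/node1/hugepages/hugepages-1048576kB/nr_hugepages"

for node in "$sysfs"/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"    # release the reservation
    done
done

cat "$sysfs/node0/hugepages/hugepages-2048kB/nr_hugepages"   # prints 0
```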
00:04:31.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:31.920 10:25:20 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:31.920 10:25:20 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.920 10:25:20 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:34.457 10:25:22 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:34.457 10:25:22 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.457 10:25:22 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.457 10:25:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:34.457 ************************************ 00:04:34.457 START TEST guess_driver 00:04:34.457 ************************************ 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 102 > 0 )) 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:34.457 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:34.457 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:34.457 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:34.457 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:34.457 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:34.457 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:34.457 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:34.457 Looking for driver=vfio-pci 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.457 10:25:22 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.393 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.394 10:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.331 10:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.331 10:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.331 10:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.331 10:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:36.331 10:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:36.331 10:25:24 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.331 10:25:24 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.863 00:04:38.863 real 0m4.464s 00:04:38.863 user 0m1.057s 00:04:38.863 sys 0m1.660s 00:04:38.863 10:25:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.864 10:25:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.864 ************************************ 00:04:38.864 END TEST guess_driver 00:04:38.864 ************************************ 00:04:38.864 00:04:38.864 real 0m6.828s 00:04:38.864 user 0m1.623s 00:04:38.864 sys 0m2.584s 00:04:38.864 10:25:27 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.864 10:25:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.864 ************************************ 00:04:38.864 END TEST driver 00:04:38.864 ************************************ 00:04:38.864 10:25:27 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:38.864 10:25:27 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:38.864 10:25:27 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.864 10:25:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.864 ************************************ 00:04:38.864 START TEST devices 00:04:38.864 ************************************ 00:04:38.864 10:25:27 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:38.864 * Looking for test storage... 
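The guess_driver trace above picks vfio-pci because IOMMU groups exist (this run counted 102) and `modprobe --show-depends vfio_pci` resolved to real `.ko` files. A sketch of that decision with the inputs passed as parameters, so the logic is testable without touching `/sys` or running modprobe (the parameterized function is illustrative; the "No valid driver found" string matches the comparison visible in the trace):

```shell
#!/usr/bin/env bash
# Sketch of the driver-guess decision: vfio-pci when IOMMU groups exist
# and the module dependency listing names .ko files, else report failure.
pick_driver_sketch() {
    local iommu_groups=$1 modprobe_out=$2
    if (( iommu_groups > 0 )) && [[ $modprobe_out == *.ko* ]]; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}

pick_driver_sketch 102 'insmod /lib/modules/x/vfio-pci.ko.xz'   # prints vfio-pci
```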
00:04:38.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:38.864 10:25:27 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:38.864 10:25:27 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:38.864 10:25:27 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.864 10:25:27 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:40.240 10:25:28 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:40.240 10:25:28 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:40.240 10:25:28 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:40.240 10:25:28 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:40.240 10:25:28 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:40.240 10:25:28 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:40.240 10:25:28 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:40.240 10:25:28 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:40.240 10:25:28 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:40.240 10:25:28 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:40.240 10:25:28 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:40.240 10:25:28 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:40.240 10:25:28 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:40.240 10:25:28 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:84:00.0 00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:40.241 10:25:28 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:40.241 10:25:28 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:40.241 No valid GPT data, bailing 00:04:40.241 10:25:28 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:40.241 10:25:28 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:40.241 10:25:28 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:40.241 10:25:28 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:40.241 10:25:28 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:40.241 10:25:28 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:84:00.0 00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:40.241 10:25:28 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:40.241 10:25:28 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.241 10:25:28 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.241 10:25:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:40.241 ************************************ 00:04:40.241 START TEST nvme_mount 00:04:40.241 ************************************ 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:40.241 10:25:28 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:40.241 10:25:28 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:41.176 Creating new GPT entries in memory. 00:04:41.176 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.176 other utilities. 00:04:41.176 10:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.176 10:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.176 10:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.176 10:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.176 10:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:42.111 Creating new GPT entries in memory. 00:04:42.111 The operation has completed successfully. 
00:04:42.111 10:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.111 10:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.111 10:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3712153 00:04:42.111 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.111 10:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:42.111 10:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.111 10:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:42.111 10:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:84:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.369 10:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:43.305 
10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:43.305 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.305 10:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:43.563 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:43.563 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:43.563 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:43.563 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:43.563 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:43.563 10:25:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:43.564 10:25:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.564 10:25:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:43.564 10:25:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:43.564 10:25:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:84:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.822 10:25:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.759 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == 
\0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.759 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:44.759 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:44.759 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.759 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.759 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.759 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:84:00.0 data@nvme0n1 '' '' 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:04:44.760 10:25:33 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.760 10:25:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:45.695 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:45.695 00:04:45.695 real 0m5.607s 00:04:45.695 user 0m1.256s 00:04:45.695 sys 0m2.062s 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.695 10:25:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:45.695 ************************************ 00:04:45.695 END TEST nvme_mount 00:04:45.695 ************************************ 00:04:45.695 10:25:34 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:45.695 10:25:34 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
00:04:45.695 10:25:34 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.695 10:25:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:45.954 ************************************ 00:04:45.954 START TEST dm_mount 00:04:45.954 ************************************ 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:45.954 10:25:34 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:46.890 Creating new GPT entries in memory.
00:04:46.890 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:46.890 other utilities.
00:04:46.890 10:25:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:46.890 10:25:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:46.890 10:25:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:46.890 10:25:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:46.890 10:25:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:47.825 Creating new GPT entries in memory.
00:04:47.825 The operation has completed successfully.
00:04:47.825 10:25:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:47.825 10:25:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:47.825 10:25:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:47.825 10:25:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:47.825 10:25:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:48.761 The operation has completed successfully.
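The partition_drive trace above derives each partition's sector range from a fixed 1 GiB slice: 1073741824 bytes / 512 = 2097152 sectors, with the first partition starting at the conventional sector 2048. The following is a standalone dry-run sketch of that arithmetic, not the real setup/common.sh helper: it only echoes the sgdisk commands the test would run under flock, and never touches a disk.

```shell
#!/usr/bin/env bash
# Dry-run reconstruction of the partition_drive loop traced above:
# compute start/end sectors for two equal 1 GiB partitions and print
# the sgdisk invocations instead of executing them on /dev/nvme0n1.
disk=nvme0n1
part_no=2
size=1073741824            # bytes per partition
(( size /= 512 ))          # bytes -> 512-byte sectors (2097152)

part_start=0
part_end=0
for (( part = 1; part <= part_no; part++ )); do
  # The first partition starts at sector 2048; each later one starts
  # right after the previous partition's last sector.
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  echo "flock /dev/$disk sgdisk /dev/$disk --new=$part:$part_start:$part_end"
done
```

With the values from the trace this prints `--new=1:2048:2099199` and `--new=2:2099200:4196351`, matching the two sgdisk calls logged above.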
00:04:48.761 10:25:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:48.761 10:25:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.761 10:25:37 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3713922 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:84:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.020 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:49.021 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:04:49.021 10:25:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.021 10:25:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.021 10:25:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:49.957 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.957 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:49.957 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:49.957 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.957 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.957 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.957 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.957 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.957 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.957 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:84:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.958 10:25:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.896 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:51.156 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.156 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:51.156 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:04:51.156 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:51.156 10:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:51.156 00:04:51.156 real 0m5.231s 00:04:51.156 user 0m0.780s 00:04:51.156 sys 0m1.391s 00:04:51.156 10:25:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.156 10:25:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:51.156 ************************************ 00:04:51.156 END TEST dm_mount 00:04:51.156 ************************************ 00:04:51.156 10:25:39 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:51.156 10:25:39 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:51.156 10:25:39 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.156 10:25:39 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.156 10:25:39 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:51.156 10:25:39 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.156 10:25:39 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:51.416 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:51.416 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:51.416 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:51.416 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:51.416 10:25:39 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:51.416 10:25:39 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:51.416 10:25:39 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:04:51.416 10:25:39 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.416 10:25:39 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:51.416 10:25:39 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.416 10:25:39 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:51.416 00:04:51.416 real 0m12.651s 00:04:51.416 user 0m2.664s 00:04:51.416 sys 0m4.456s 00:04:51.416 10:25:39 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.416 10:25:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:51.416 ************************************ 00:04:51.416 END TEST devices 00:04:51.416 ************************************ 00:04:51.416 00:04:51.416 real 0m38.889s 00:04:51.416 user 0m11.283s 00:04:51.416 sys 0m17.152s 00:04:51.416 10:25:39 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.416 10:25:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:51.416 ************************************ 00:04:51.416 END TEST setup.sh 00:04:51.416 ************************************ 00:04:51.416 10:25:39 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:52.355 Hugepages 00:04:52.355 node hugesize free / total 00:04:52.355 node0 1048576kB 0 / 0 00:04:52.355 node0 2048kB 2048 / 2048 00:04:52.355 node1 1048576kB 0 / 0 00:04:52.355 node1 2048kB 0 / 0 00:04:52.355 00:04:52.355 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.355 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - - 00:04:52.355 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - - 00:04:52.355 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - - 00:04:52.355 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - - 00:04:52.355 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - - 00:04:52.355 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - - 00:04:52.355 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - - 00:04:52.355 I/OAT 
0000:00:04.7 8086 3c27 0 ioatdma - - 00:04:52.355 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - - 00:04:52.355 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - - 00:04:52.355 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - - 00:04:52.355 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - - 00:04:52.355 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - - 00:04:52.355 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - - 00:04:52.355 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - - 00:04:52.355 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - - 00:04:52.614 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:52.614 10:25:40 -- spdk/autotest.sh@130 -- # uname -s 00:04:52.614 10:25:40 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:52.614 10:25:40 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:52.614 10:25:40 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.555 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:04:53.555 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:04:53.555 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:04:53.555 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:04:53.555 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:04:53.555 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:04:53.555 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:04:53.555 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:04:53.555 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:04:53.555 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:04:53.555 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:04:53.555 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:04:53.555 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:04:53.555 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:04:53.555 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:04:53.555 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:04:54.496 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:04:54.496 10:25:42 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:55.433 10:25:43 -- 
common/autotest_common.sh@1529 -- # bdfs=() 00:04:55.433 10:25:43 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:55.433 10:25:43 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:55.433 10:25:43 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:55.433 10:25:43 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:55.433 10:25:43 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:55.433 10:25:43 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.433 10:25:43 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:55.433 10:25:43 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:55.692 10:25:43 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:55.692 10:25:43 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:04:55.692 10:25:43 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.627 Waiting for block devices as requested 00:04:56.627 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:04:56.627 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:04:56.627 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:04:56.627 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:04:56.885 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:04:56.885 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:04:56.885 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:04:56.885 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:04:57.144 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:04:57.144 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:04:57.144 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:04:57.403 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:04:57.403 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:04:57.403 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:04:57.403 0000:80:04.2 (8086 3c22): vfio-pci -> 
ioatdma 00:04:57.666 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:04:57.666 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:04:57.666 10:25:46 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:57.666 10:25:46 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:04:57.666 10:25:46 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:04:57.666 10:25:46 -- common/autotest_common.sh@1498 -- # grep 0000:84:00.0/nvme/nvme 00:04:57.666 10:25:46 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:04:57.666 10:25:46 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:04:57.666 10:25:46 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:04:57.666 10:25:46 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:57.666 10:25:46 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:57.666 10:25:46 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:57.666 10:25:46 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:57.666 10:25:46 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:57.666 10:25:46 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:57.666 10:25:46 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:04:57.666 10:25:46 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:57.666 10:25:46 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:57.666 10:25:46 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:57.666 10:25:46 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:57.666 10:25:46 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:57.666 10:25:46 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:57.666 10:25:46 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:57.666 10:25:46 -- 
common/autotest_common.sh@1553 -- # continue 00:04:57.666 10:25:46 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:57.666 10:25:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.666 10:25:46 -- common/autotest_common.sh@10 -- # set +x 00:04:57.666 10:25:46 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:57.666 10:25:46 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:57.666 10:25:46 -- common/autotest_common.sh@10 -- # set +x 00:04:57.666 10:25:46 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.671 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:04:58.671 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:04:58.671 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:04:58.671 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:04:58.671 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:04:58.931 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:04:58.931 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:04:58.931 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:04:58.931 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:04:58.931 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:04:58.931 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:04:58.931 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:04:58.931 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:04:58.931 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:04:58.931 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:04:58.931 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:04:59.869 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.869 10:25:48 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:59.869 10:25:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.869 10:25:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.869 10:25:48 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:59.869 10:25:48 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:59.869 10:25:48 -- 
common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:59.869 10:25:48 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:59.869 10:25:48 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:59.869 10:25:48 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:59.869 10:25:48 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:59.869 10:25:48 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:59.869 10:25:48 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:59.869 10:25:48 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:59.869 10:25:48 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:59.869 10:25:48 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:59.869 10:25:48 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:04:59.869 10:25:48 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:59.869 10:25:48 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:04:59.869 10:25:48 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:04:59.869 10:25:48 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:59.869 10:25:48 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:59.869 10:25:48 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:84:00.0 00:04:59.869 10:25:48 -- common/autotest_common.sh@1588 -- # [[ -z 0000:84:00.0 ]] 00:04:59.869 10:25:48 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3717928 00:04:59.869 10:25:48 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.869 10:25:48 -- common/autotest_common.sh@1594 -- # waitforlisten 3717928 00:04:59.869 10:25:48 -- common/autotest_common.sh@827 -- # '[' -z 3717928 ']' 00:04:59.869 10:25:48 -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:59.869 10:25:48 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:59.869 10:25:48 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.869 10:25:48 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:59.869 10:25:48 -- common/autotest_common.sh@10 -- # set +x 00:05:00.127 [2024-07-23 10:25:48.396335] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:00.127 [2024-07-23 10:25:48.396435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717928 ] 00:05:00.127 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.127 [2024-07-23 10:25:48.459904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.127 [2024-07-23 10:25:48.551075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.386 10:25:48 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:00.386 10:25:48 -- common/autotest_common.sh@860 -- # return 0 00:05:00.386 10:25:48 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:00.386 10:25:48 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:00.386 10:25:48 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:05:03.672 nvme0n1 00:05:03.672 10:25:51 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:03.672 [2024-07-23 10:25:52.151412] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 
00:05:03.672 [2024-07-23 10:25:52.151461] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:03.672 request: 00:05:03.672 { 00:05:03.672 "nvme_ctrlr_name": "nvme0", 00:05:03.672 "password": "test", 00:05:03.672 "method": "bdev_nvme_opal_revert", 00:05:03.672 "req_id": 1 00:05:03.672 } 00:05:03.672 Got JSON-RPC error response 00:05:03.672 response: 00:05:03.672 { 00:05:03.672 "code": -32603, 00:05:03.672 "message": "Internal error" 00:05:03.672 } 00:05:03.672 10:25:52 -- common/autotest_common.sh@1600 -- # true 00:05:03.672 10:25:52 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:03.672 10:25:52 -- common/autotest_common.sh@1604 -- # killprocess 3717928 00:05:03.672 10:25:52 -- common/autotest_common.sh@946 -- # '[' -z 3717928 ']' 00:05:03.672 10:25:52 -- common/autotest_common.sh@950 -- # kill -0 3717928 00:05:03.672 10:25:52 -- common/autotest_common.sh@951 -- # uname 00:05:03.931 10:25:52 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:03.931 10:25:52 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3717928 00:05:03.931 10:25:52 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:03.931 10:25:52 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:03.931 10:25:52 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3717928' 00:05:03.931 killing process with pid 3717928 00:05:03.931 10:25:52 -- common/autotest_common.sh@965 -- # kill 3717928 00:05:03.931 10:25:52 -- common/autotest_common.sh@970 -- # wait 3717928 00:05:03.931 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:05.322 10:25:53 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:05.322 10:25:53 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:05.322 10:25:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:05.322 10:25:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:05.322 10:25:53 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:05.322 10:25:53 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:05.322 10:25:53 -- common/autotest_common.sh@10 -- # set +x 00:05:05.322 10:25:53 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:05.322 10:25:53 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:05.322 10:25:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
00:05:05.322 10:25:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.322 10:25:53 -- common/autotest_common.sh@10 -- # set +x 00:05:05.581 ************************************ 00:05:05.581 START TEST env 00:05:05.581 ************************************ 00:05:05.581 10:25:53 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:05.581 * Looking for test storage... 00:05:05.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:05.581 10:25:53 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:05.581 10:25:53 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.581 10:25:53 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.581 10:25:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.581 ************************************ 00:05:05.581 START TEST env_memory 00:05:05.581 ************************************ 00:05:05.581 10:25:53 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:05.581 00:05:05.581 00:05:05.581 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.581 http://cunit.sourceforge.net/ 00:05:05.581 00:05:05.581 00:05:05.581 Suite: memory 00:05:05.581 Test: alloc and free memory map ...[2024-07-23 10:25:53.954094] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.581 passed 00:05:05.581 Test: mem map translation ...[2024-07-23 10:25:53.984786] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.581 [2024-07-23 10:25:53.984813] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 
590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.581 [2024-07-23 10:25:53.984865] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.581 [2024-07-23 10:25:53.984880] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.581 passed 00:05:05.581 Test: mem map registration ...[2024-07-23 10:25:54.049236] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:05.581 [2024-07-23 10:25:54.049273] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:05.581 passed 00:05:05.841 Test: mem map adjacent registrations ...passed 00:05:05.841 00:05:05.841 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.841 suites 1 1 n/a 0 0 00:05:05.841 tests 4 4 4 0 0 00:05:05.841 asserts 152 152 152 0 n/a 00:05:05.841 00:05:05.841 Elapsed time = 0.216 seconds 00:05:05.841 00:05:05.841 real 0m0.225s 00:05:05.841 user 0m0.215s 00:05:05.841 sys 0m0.008s 00:05:05.841 10:25:54 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.841 10:25:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:05.841 ************************************ 00:05:05.841 END TEST env_memory 00:05:05.841 ************************************ 00:05:05.841 10:25:54 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.841 10:25:54 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.841 10:25:54 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:05:05.841 10:25:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.841 ************************************ 00:05:05.841 START TEST env_vtophys 00:05:05.841 ************************************ 00:05:05.841 10:25:54 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.841 EAL: lib.eal log level changed from notice to debug 00:05:05.841 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.841 EAL: Detected lcore 1 as core 1 on socket 0 00:05:05.841 EAL: Detected lcore 2 as core 2 on socket 0 00:05:05.841 EAL: Detected lcore 3 as core 3 on socket 0 00:05:05.841 EAL: Detected lcore 4 as core 4 on socket 0 00:05:05.841 EAL: Detected lcore 5 as core 5 on socket 0 00:05:05.841 EAL: Detected lcore 6 as core 6 on socket 0 00:05:05.841 EAL: Detected lcore 7 as core 7 on socket 0 00:05:05.841 EAL: Detected lcore 8 as core 0 on socket 1 00:05:05.841 EAL: Detected lcore 9 as core 1 on socket 1 00:05:05.841 EAL: Detected lcore 10 as core 2 on socket 1 00:05:05.841 EAL: Detected lcore 11 as core 3 on socket 1 00:05:05.841 EAL: Detected lcore 12 as core 4 on socket 1 00:05:05.841 EAL: Detected lcore 13 as core 5 on socket 1 00:05:05.841 EAL: Detected lcore 14 as core 6 on socket 1 00:05:05.841 EAL: Detected lcore 15 as core 7 on socket 1 00:05:05.841 EAL: Detected lcore 16 as core 0 on socket 0 00:05:05.841 EAL: Detected lcore 17 as core 1 on socket 0 00:05:05.841 EAL: Detected lcore 18 as core 2 on socket 0 00:05:05.841 EAL: Detected lcore 19 as core 3 on socket 0 00:05:05.841 EAL: Detected lcore 20 as core 4 on socket 0 00:05:05.841 EAL: Detected lcore 21 as core 5 on socket 0 00:05:05.841 EAL: Detected lcore 22 as core 6 on socket 0 00:05:05.841 EAL: Detected lcore 23 as core 7 on socket 0 00:05:05.841 EAL: Detected lcore 24 as core 0 on socket 1 00:05:05.841 EAL: Detected lcore 25 as core 1 on socket 1 00:05:05.841 EAL: Detected lcore 26 as core 2 on socket 1 00:05:05.841 EAL: 
Detected lcore 27 as core 3 on socket 1 00:05:05.841 EAL: Detected lcore 28 as core 4 on socket 1 00:05:05.841 EAL: Detected lcore 29 as core 5 on socket 1 00:05:05.841 EAL: Detected lcore 30 as core 6 on socket 1 00:05:05.841 EAL: Detected lcore 31 as core 7 on socket 1 00:05:05.841 EAL: Maximum logical cores by configuration: 128 00:05:05.841 EAL: Detected CPU lcores: 32 00:05:05.841 EAL: Detected NUMA nodes: 2 00:05:05.841 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:05.841 EAL: Detected shared linkage of DPDK 00:05:05.841 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:05.841 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:05.841 EAL: Registered [vdev] bus. 00:05:05.841 EAL: bus.vdev log level changed from disabled to notice 00:05:05.841 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:05.842 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:05.842 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:05.842 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:05.842 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:05.842 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:05.842 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:05.842 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:05.842 EAL: No shared files mode enabled, IPC will be disabled 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: 
Bus pci wants IOVA as 'DC' 00:05:05.842 EAL: Bus vdev wants IOVA as 'DC' 00:05:05.842 EAL: Buses did not request a specific IOVA mode. 00:05:05.842 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:05.842 EAL: Selected IOVA mode 'VA' 00:05:05.842 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.842 EAL: Probing VFIO support... 00:05:05.842 EAL: IOMMU type 1 (Type 1) is supported 00:05:05.842 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:05.842 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:05.842 EAL: VFIO support initialized 00:05:05.842 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.842 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.842 EAL: Setting up physically contiguous memory... 00:05:05.842 EAL: Setting maximum number of open files to 524288 00:05:05.842 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:05.842 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:05.842 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:05.842 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.842 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:05.842 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.842 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.842 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:05.842 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:05.842 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.842 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:05.842 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.842 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.842 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:05.842 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:05.842 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.842 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 
00:05:05.842 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.842 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.842 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:05.842 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:05.842 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.842 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:05.842 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.842 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.842 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:05.842 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:05.842 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:05.842 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.842 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:05.842 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.842 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.842 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:05.842 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:05.842 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.842 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:05.842 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.842 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.842 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:05.842 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:05.842 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.842 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:05.842 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.842 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.842 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:05.842 EAL: VA reserved for memseg list at 
0x201800e00000, size 400000000 00:05:05.842 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.842 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:05.842 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.842 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.842 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:05.842 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:05.842 EAL: Hugepages will be freed exactly as allocated. 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: TSC frequency is ~2700000 KHz 00:05:05.842 EAL: Main lcore 0 is ready (tid=7ff5db0c6a00;cpuset=[0]) 00:05:05.842 EAL: Trying to obtain current memory policy. 00:05:05.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.842 EAL: Restoring previous memory policy: 0 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was expanded by 2MB 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:05.842 EAL: Mem event callback 'spdk:(nil)' registered 00:05:05.842 00:05:05.842 00:05:05.842 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.842 http://cunit.sourceforge.net/ 00:05:05.842 00:05:05.842 00:05:05.842 Suite: components_suite 00:05:05.842 Test: vtophys_malloc_test ...passed 00:05:05.842 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:05.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.842 EAL: Restoring previous memory policy: 4 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was expanded by 4MB 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was shrunk by 4MB 00:05:05.842 EAL: Trying to obtain current memory policy. 00:05:05.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.842 EAL: Restoring previous memory policy: 4 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was expanded by 6MB 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was shrunk by 6MB 00:05:05.842 EAL: Trying to obtain current memory policy. 00:05:05.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.842 EAL: Restoring previous memory policy: 4 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was expanded by 10MB 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was shrunk by 10MB 00:05:05.842 EAL: Trying to obtain current memory policy. 
00:05:05.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.842 EAL: Restoring previous memory policy: 4 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was expanded by 18MB 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was shrunk by 18MB 00:05:05.842 EAL: Trying to obtain current memory policy. 00:05:05.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.842 EAL: Restoring previous memory policy: 4 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was expanded by 34MB 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was shrunk by 34MB 00:05:05.842 EAL: Trying to obtain current memory policy. 00:05:05.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.842 EAL: Restoring previous memory policy: 4 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was expanded by 66MB 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was shrunk by 66MB 00:05:05.842 EAL: Trying to obtain current memory policy. 
00:05:05.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.842 EAL: Restoring previous memory policy: 4 00:05:05.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.842 EAL: request: mp_malloc_sync 00:05:05.842 EAL: No shared files mode enabled, IPC is disabled 00:05:05.842 EAL: Heap on socket 0 was expanded by 130MB 00:05:06.102 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.102 EAL: request: mp_malloc_sync 00:05:06.102 EAL: No shared files mode enabled, IPC is disabled 00:05:06.102 EAL: Heap on socket 0 was shrunk by 130MB 00:05:06.102 EAL: Trying to obtain current memory policy. 00:05:06.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.102 EAL: Restoring previous memory policy: 4 00:05:06.102 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.102 EAL: request: mp_malloc_sync 00:05:06.102 EAL: No shared files mode enabled, IPC is disabled 00:05:06.102 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.102 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.102 EAL: request: mp_malloc_sync 00:05:06.102 EAL: No shared files mode enabled, IPC is disabled 00:05:06.102 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.102 EAL: Trying to obtain current memory policy. 00:05:06.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.361 EAL: Restoring previous memory policy: 4 00:05:06.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.361 EAL: request: mp_malloc_sync 00:05:06.361 EAL: No shared files mode enabled, IPC is disabled 00:05:06.361 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.361 EAL: request: mp_malloc_sync 00:05:06.361 EAL: No shared files mode enabled, IPC is disabled 00:05:06.361 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.361 EAL: Trying to obtain current memory policy. 
00:05:06.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.620 EAL: Restoring previous memory policy: 4 00:05:06.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.620 EAL: request: mp_malloc_sync 00:05:06.620 EAL: No shared files mode enabled, IPC is disabled 00:05:06.620 EAL: Heap on socket 0 was expanded by 1026MB 00:05:06.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.880 EAL: request: mp_malloc_sync 00:05:06.880 EAL: No shared files mode enabled, IPC is disabled 00:05:06.880 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:06.880 passed 00:05:06.880 00:05:06.880 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.880 suites 1 1 n/a 0 0 00:05:06.880 tests 2 2 2 0 0 00:05:06.880 asserts 497 497 497 0 n/a 00:05:06.880 00:05:06.880 Elapsed time = 0.949 seconds 00:05:06.880 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.880 EAL: request: mp_malloc_sync 00:05:06.880 EAL: No shared files mode enabled, IPC is disabled 00:05:06.880 EAL: Heap on socket 0 was shrunk by 2MB 00:05:06.880 EAL: No shared files mode enabled, IPC is disabled 00:05:06.880 EAL: No shared files mode enabled, IPC is disabled 00:05:06.880 EAL: No shared files mode enabled, IPC is disabled 00:05:06.880 00:05:06.880 real 0m1.060s 00:05:06.880 user 0m0.519s 00:05:06.880 sys 0m0.508s 00:05:06.880 10:25:55 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.880 10:25:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:06.880 ************************************ 00:05:06.880 END TEST env_vtophys 00:05:06.880 ************************************ 00:05:06.880 10:25:55 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:06.880 10:25:55 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.880 10:25:55 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.880 10:25:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.881 
************************************ 00:05:06.881 START TEST env_pci 00:05:06.881 ************************************ 00:05:06.881 10:25:55 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:06.881 00:05:06.881 00:05:06.881 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.881 http://cunit.sourceforge.net/ 00:05:06.881 00:05:06.881 00:05:06.881 Suite: pci 00:05:06.881 Test: pci_hook ...[2024-07-23 10:25:55.307436] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3718622 has claimed it 00:05:06.881 EAL: Cannot find device (10000:00:01.0) 00:05:06.881 EAL: Failed to attach device on primary process 00:05:06.881 passed 00:05:06.881 00:05:06.881 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.881 suites 1 1 n/a 0 0 00:05:06.881 tests 1 1 1 0 0 00:05:06.881 asserts 25 25 25 0 n/a 00:05:06.881 00:05:06.881 Elapsed time = 0.018 seconds 00:05:06.881 00:05:06.881 real 0m0.031s 00:05:06.881 user 0m0.013s 00:05:06.881 sys 0m0.018s 00:05:06.881 10:25:55 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.881 10:25:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:06.881 ************************************ 00:05:06.881 END TEST env_pci 00:05:06.881 ************************************ 00:05:06.881 10:25:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:06.881 10:25:55 env -- env/env.sh@15 -- # uname 00:05:06.881 10:25:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:06.881 10:25:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:06.881 10:25:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.881 10:25:55 env -- 
common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:06.881 10:25:55 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.881 10:25:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.140 ************************************ 00:05:07.140 START TEST env_dpdk_post_init 00:05:07.140 ************************************ 00:05:07.140 10:25:55 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.140 EAL: Detected CPU lcores: 32 00:05:07.140 EAL: Detected NUMA nodes: 2 00:05:07.140 EAL: Detected shared linkage of DPDK 00:05:07.140 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.140 EAL: Selected IOVA mode 'VA' 00:05:07.140 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.140 EAL: VFIO support initialized 00:05:07.140 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.140 EAL: Using IOMMU type 1 (Type 1) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:00:04.0 (socket 0) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:00:04.1 (socket 0) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:00:04.2 (socket 0) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:00:04.3 (socket 0) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:00:04.4 (socket 0) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:00:04.5 (socket 0) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:00:04.6 (socket 0) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:00:04.7 (socket 0) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:80:04.0 (socket 1) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:80:04.1 (socket 1) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 
0000:80:04.2 (socket 1) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:80:04.3 (socket 1) 00:05:07.140 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:80:04.4 (socket 1) 00:05:07.400 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:80:04.5 (socket 1) 00:05:07.400 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:80:04.6 (socket 1) 00:05:07.400 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:80:04.7 (socket 1) 00:05:07.967 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1) 00:05:11.248 EAL: Releasing PCI mapped resource for 0000:84:00.0 00:05:11.248 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:05:11.507 Starting DPDK initialization... 00:05:11.507 Starting SPDK post initialization... 00:05:11.507 SPDK NVMe probe 00:05:11.507 Attaching to 0000:84:00.0 00:05:11.507 Attached to 0000:84:00.0 00:05:11.507 Cleaning up... 00:05:11.507 00:05:11.507 real 0m4.386s 00:05:11.507 user 0m3.269s 00:05:11.507 sys 0m0.177s 00:05:11.507 10:25:59 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.507 10:25:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.507 ************************************ 00:05:11.507 END TEST env_dpdk_post_init 00:05:11.507 ************************************ 00:05:11.507 10:25:59 env -- env/env.sh@26 -- # uname 00:05:11.507 10:25:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:11.507 10:25:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.507 10:25:59 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.507 10:25:59 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.507 10:25:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.507 ************************************ 00:05:11.507 START TEST env_mem_callbacks 00:05:11.507 
************************************ 00:05:11.507 10:25:59 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.507 EAL: Detected CPU lcores: 32 00:05:11.507 EAL: Detected NUMA nodes: 2 00:05:11.507 EAL: Detected shared linkage of DPDK 00:05:11.507 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.507 EAL: Selected IOVA mode 'VA' 00:05:11.507 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.507 EAL: VFIO support initialized 00:05:11.507 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.507 00:05:11.507 00:05:11.507 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.507 http://cunit.sourceforge.net/ 00:05:11.507 00:05:11.507 00:05:11.507 Suite: memory 00:05:11.507 Test: test ... 00:05:11.507 register 0x200000200000 2097152 00:05:11.507 malloc 3145728 00:05:11.507 register 0x200000400000 4194304 00:05:11.507 buf 0x200000500000 len 3145728 PASSED 00:05:11.507 malloc 64 00:05:11.507 buf 0x2000004fff40 len 64 PASSED 00:05:11.507 malloc 4194304 00:05:11.507 register 0x200000800000 6291456 00:05:11.507 buf 0x200000a00000 len 4194304 PASSED 00:05:11.507 free 0x200000500000 3145728 00:05:11.507 free 0x2000004fff40 64 00:05:11.507 unregister 0x200000400000 4194304 PASSED 00:05:11.507 free 0x200000a00000 4194304 00:05:11.507 unregister 0x200000800000 6291456 PASSED 00:05:11.507 malloc 8388608 00:05:11.507 register 0x200000400000 10485760 00:05:11.507 buf 0x200000600000 len 8388608 PASSED 00:05:11.507 free 0x200000600000 8388608 00:05:11.507 unregister 0x200000400000 10485760 PASSED 00:05:11.507 passed 00:05:11.507 00:05:11.507 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.507 suites 1 1 n/a 0 0 00:05:11.507 tests 1 1 1 0 0 00:05:11.507 asserts 15 15 15 0 n/a 00:05:11.507 00:05:11.507 Elapsed time = 0.005 seconds 00:05:11.507 00:05:11.507 real 0m0.047s 00:05:11.507 user 0m0.017s 00:05:11.507 sys 0m0.030s 
00:05:11.507 10:25:59 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.507 10:25:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:11.507 ************************************ 00:05:11.507 END TEST env_mem_callbacks 00:05:11.507 ************************************ 00:05:11.507 00:05:11.507 real 0m6.071s 00:05:11.507 user 0m4.148s 00:05:11.507 sys 0m0.962s 00:05:11.507 10:25:59 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.507 10:25:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.507 ************************************ 00:05:11.507 END TEST env 00:05:11.507 ************************************ 00:05:11.507 10:25:59 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.507 10:25:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.507 10:25:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.507 10:25:59 -- common/autotest_common.sh@10 -- # set +x 00:05:11.507 ************************************ 00:05:11.507 START TEST rpc 00:05:11.507 ************************************ 00:05:11.507 10:25:59 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.507 * Looking for test storage... 
00:05:11.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.766 10:26:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3719153 00:05:11.766 10:26:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.766 10:26:00 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:11.766 10:26:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3719153 00:05:11.766 10:26:00 rpc -- common/autotest_common.sh@827 -- # '[' -z 3719153 ']' 00:05:11.766 10:26:00 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.766 10:26:00 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:11.766 10:26:00 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.766 10:26:00 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:11.766 10:26:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.766 [2024-07-23 10:26:00.068996] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:11.766 [2024-07-23 10:26:00.069099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719153 ] 00:05:11.766 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.766 [2024-07-23 10:26:00.128866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.766 [2024-07-23 10:26:00.216413] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:11.766 [2024-07-23 10:26:00.216484] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3719153' to capture a snapshot of events at runtime. 
00:05:11.766 [2024-07-23 10:26:00.216501] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:11.766 [2024-07-23 10:26:00.216515] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:11.766 [2024-07-23 10:26:00.216527] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3719153 for offline analysis/debug. 00:05:11.766 [2024-07-23 10:26:00.216573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.024 10:26:00 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.024 10:26:00 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:12.024 10:26:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.024 10:26:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.024 10:26:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:12.024 10:26:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:12.024 10:26:00 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.024 10:26:00 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.024 10:26:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.024 ************************************ 00:05:12.024 START TEST rpc_integrity 00:05:12.024 ************************************ 00:05:12.024 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@1121 
-- # rpc_integrity 00:05:12.024 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.024 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.024 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.024 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.024 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.024 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.024 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.024 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.024 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.024 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.024 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.024 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:12.024 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.024 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.024 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.283 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.283 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.283 { 00:05:12.283 "name": "Malloc0", 00:05:12.283 "aliases": [ 00:05:12.283 "daf4192e-6cb2-42da-871e-4a7bb76c8531" 00:05:12.283 ], 00:05:12.283 "product_name": "Malloc disk", 00:05:12.283 "block_size": 512, 00:05:12.283 "num_blocks": 16384, 00:05:12.283 "uuid": "daf4192e-6cb2-42da-871e-4a7bb76c8531", 00:05:12.283 "assigned_rate_limits": { 00:05:12.283 "rw_ios_per_sec": 0, 00:05:12.283 "rw_mbytes_per_sec": 0, 00:05:12.283 "r_mbytes_per_sec": 0, 00:05:12.283 "w_mbytes_per_sec": 0 00:05:12.283 }, 00:05:12.283 "claimed": false, 
00:05:12.283 "zoned": false, 00:05:12.283 "supported_io_types": { 00:05:12.283 "read": true, 00:05:12.283 "write": true, 00:05:12.283 "unmap": true, 00:05:12.283 "write_zeroes": true, 00:05:12.283 "flush": true, 00:05:12.283 "reset": true, 00:05:12.283 "compare": false, 00:05:12.283 "compare_and_write": false, 00:05:12.283 "abort": true, 00:05:12.283 "nvme_admin": false, 00:05:12.283 "nvme_io": false 00:05:12.283 }, 00:05:12.283 "memory_domains": [ 00:05:12.283 { 00:05:12.283 "dma_device_id": "system", 00:05:12.283 "dma_device_type": 1 00:05:12.283 }, 00:05:12.283 { 00:05:12.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.283 "dma_device_type": 2 00:05:12.283 } 00:05:12.283 ], 00:05:12.283 "driver_specific": {} 00:05:12.283 } 00:05:12.283 ]' 00:05:12.283 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.283 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.283 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:12.283 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.283 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.283 [2024-07-23 10:26:00.572796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:12.283 [2024-07-23 10:26:00.572850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.283 [2024-07-23 10:26:00.572873] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17ae5f0 00:05:12.284 [2024-07-23 10:26:00.572887] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.284 [2024-07-23 10:26:00.574419] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.284 [2024-07-23 10:26:00.574453] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.284 Passthru0 00:05:12.284 10:26:00 rpc.rpc_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.284 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.284 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.284 { 00:05:12.284 "name": "Malloc0", 00:05:12.284 "aliases": [ 00:05:12.284 "daf4192e-6cb2-42da-871e-4a7bb76c8531" 00:05:12.284 ], 00:05:12.284 "product_name": "Malloc disk", 00:05:12.284 "block_size": 512, 00:05:12.284 "num_blocks": 16384, 00:05:12.284 "uuid": "daf4192e-6cb2-42da-871e-4a7bb76c8531", 00:05:12.284 "assigned_rate_limits": { 00:05:12.284 "rw_ios_per_sec": 0, 00:05:12.284 "rw_mbytes_per_sec": 0, 00:05:12.284 "r_mbytes_per_sec": 0, 00:05:12.284 "w_mbytes_per_sec": 0 00:05:12.284 }, 00:05:12.284 "claimed": true, 00:05:12.284 "claim_type": "exclusive_write", 00:05:12.284 "zoned": false, 00:05:12.284 "supported_io_types": { 00:05:12.284 "read": true, 00:05:12.284 "write": true, 00:05:12.284 "unmap": true, 00:05:12.284 "write_zeroes": true, 00:05:12.284 "flush": true, 00:05:12.284 "reset": true, 00:05:12.284 "compare": false, 00:05:12.284 "compare_and_write": false, 00:05:12.284 "abort": true, 00:05:12.284 "nvme_admin": false, 00:05:12.284 "nvme_io": false 00:05:12.284 }, 00:05:12.284 "memory_domains": [ 00:05:12.284 { 00:05:12.284 "dma_device_id": "system", 00:05:12.284 "dma_device_type": 1 00:05:12.284 }, 00:05:12.284 { 00:05:12.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.284 "dma_device_type": 2 00:05:12.284 } 00:05:12.284 ], 00:05:12.284 "driver_specific": {} 00:05:12.284 }, 00:05:12.284 { 00:05:12.284 "name": "Passthru0", 00:05:12.284 "aliases": [ 00:05:12.284 "bef23212-68a0-5747-b358-ad7df7ad378a" 00:05:12.284 ], 00:05:12.284 "product_name": "passthru", 00:05:12.284 
"block_size": 512, 00:05:12.284 "num_blocks": 16384, 00:05:12.284 "uuid": "bef23212-68a0-5747-b358-ad7df7ad378a", 00:05:12.284 "assigned_rate_limits": { 00:05:12.284 "rw_ios_per_sec": 0, 00:05:12.284 "rw_mbytes_per_sec": 0, 00:05:12.284 "r_mbytes_per_sec": 0, 00:05:12.284 "w_mbytes_per_sec": 0 00:05:12.284 }, 00:05:12.284 "claimed": false, 00:05:12.284 "zoned": false, 00:05:12.284 "supported_io_types": { 00:05:12.284 "read": true, 00:05:12.284 "write": true, 00:05:12.284 "unmap": true, 00:05:12.284 "write_zeroes": true, 00:05:12.284 "flush": true, 00:05:12.284 "reset": true, 00:05:12.284 "compare": false, 00:05:12.284 "compare_and_write": false, 00:05:12.284 "abort": true, 00:05:12.284 "nvme_admin": false, 00:05:12.284 "nvme_io": false 00:05:12.284 }, 00:05:12.284 "memory_domains": [ 00:05:12.284 { 00:05:12.284 "dma_device_id": "system", 00:05:12.284 "dma_device_type": 1 00:05:12.284 }, 00:05:12.284 { 00:05:12.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.284 "dma_device_type": 2 00:05:12.284 } 00:05:12.284 ], 00:05:12.284 "driver_specific": { 00:05:12.284 "passthru": { 00:05:12.284 "name": "Passthru0", 00:05:12.284 "base_bdev_name": "Malloc0" 00:05:12.284 } 00:05:12.284 } 00:05:12.284 } 00:05:12.284 ]' 00:05:12.284 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.284 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.284 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.284 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.284 10:26:00 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.284 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.284 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.284 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.284 10:26:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.284 00:05:12.284 real 0m0.239s 00:05:12.284 user 0m0.157s 00:05:12.284 sys 0m0.027s 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.284 10:26:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.284 ************************************ 00:05:12.284 END TEST rpc_integrity 00:05:12.284 ************************************ 00:05:12.284 10:26:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:12.284 10:26:00 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.284 10:26:00 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.284 10:26:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.284 ************************************ 00:05:12.284 START TEST rpc_plugins 00:05:12.284 ************************************ 00:05:12.284 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:12.284 10:26:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:12.284 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.284 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.284 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:05:12.284 10:26:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:12.284 10:26:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:12.284 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.284 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.284 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.284 10:26:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:12.284 { 00:05:12.284 "name": "Malloc1", 00:05:12.284 "aliases": [ 00:05:12.284 "73a6d00b-9ce4-48e4-a3fa-fc7b9d5c00f4" 00:05:12.284 ], 00:05:12.284 "product_name": "Malloc disk", 00:05:12.284 "block_size": 4096, 00:05:12.284 "num_blocks": 256, 00:05:12.284 "uuid": "73a6d00b-9ce4-48e4-a3fa-fc7b9d5c00f4", 00:05:12.284 "assigned_rate_limits": { 00:05:12.284 "rw_ios_per_sec": 0, 00:05:12.284 "rw_mbytes_per_sec": 0, 00:05:12.284 "r_mbytes_per_sec": 0, 00:05:12.284 "w_mbytes_per_sec": 0 00:05:12.284 }, 00:05:12.284 "claimed": false, 00:05:12.284 "zoned": false, 00:05:12.284 "supported_io_types": { 00:05:12.284 "read": true, 00:05:12.284 "write": true, 00:05:12.284 "unmap": true, 00:05:12.284 "write_zeroes": true, 00:05:12.284 "flush": true, 00:05:12.284 "reset": true, 00:05:12.284 "compare": false, 00:05:12.284 "compare_and_write": false, 00:05:12.284 "abort": true, 00:05:12.284 "nvme_admin": false, 00:05:12.284 "nvme_io": false 00:05:12.284 }, 00:05:12.284 "memory_domains": [ 00:05:12.284 { 00:05:12.284 "dma_device_id": "system", 00:05:12.284 "dma_device_type": 1 00:05:12.284 }, 00:05:12.285 { 00:05:12.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.285 "dma_device_type": 2 00:05:12.285 } 00:05:12.285 ], 00:05:12.285 "driver_specific": {} 00:05:12.285 } 00:05:12.285 ]' 00:05:12.285 10:26:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:12.543 10:26:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:12.543 10:26:00 rpc.rpc_plugins -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:12.543 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.543 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.543 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.543 10:26:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:12.543 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.543 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.543 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.543 10:26:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:12.543 10:26:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:12.543 10:26:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:12.543 00:05:12.543 real 0m0.129s 00:05:12.543 user 0m0.082s 00:05:12.543 sys 0m0.013s 00:05:12.543 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.543 10:26:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.543 ************************************ 00:05:12.543 END TEST rpc_plugins 00:05:12.543 ************************************ 00:05:12.543 10:26:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:12.543 10:26:00 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.543 10:26:00 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.543 10:26:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.543 ************************************ 00:05:12.543 START TEST rpc_trace_cmd_test 00:05:12.543 ************************************ 00:05:12.543 10:26:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:12.543 10:26:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:12.543 10:26:00 rpc.rpc_trace_cmd_test -- 
rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:12.543 10:26:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.543 10:26:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:12.543 10:26:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.543 10:26:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:12.543 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3719153", 00:05:12.543 "tpoint_group_mask": "0x8", 00:05:12.543 "iscsi_conn": { 00:05:12.543 "mask": "0x2", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 }, 00:05:12.543 "scsi": { 00:05:12.543 "mask": "0x4", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 }, 00:05:12.543 "bdev": { 00:05:12.543 "mask": "0x8", 00:05:12.543 "tpoint_mask": "0xffffffffffffffff" 00:05:12.543 }, 00:05:12.543 "nvmf_rdma": { 00:05:12.543 "mask": "0x10", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 }, 00:05:12.543 "nvmf_tcp": { 00:05:12.543 "mask": "0x20", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 }, 00:05:12.543 "ftl": { 00:05:12.543 "mask": "0x40", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 }, 00:05:12.543 "blobfs": { 00:05:12.543 "mask": "0x80", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 }, 00:05:12.543 "dsa": { 00:05:12.543 "mask": "0x200", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 }, 00:05:12.543 "thread": { 00:05:12.543 "mask": "0x400", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 }, 00:05:12.543 "nvme_pcie": { 00:05:12.543 "mask": "0x800", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 }, 00:05:12.543 "iaa": { 00:05:12.543 "mask": "0x1000", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 }, 00:05:12.543 "nvme_tcp": { 00:05:12.543 "mask": "0x2000", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 }, 00:05:12.543 "bdev_nvme": { 00:05:12.543 "mask": "0x4000", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 }, 00:05:12.543 "sock": { 00:05:12.543 "mask": "0x8000", 00:05:12.543 "tpoint_mask": "0x0" 00:05:12.543 } 
00:05:12.543 }' 00:05:12.543 10:26:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:12.543 10:26:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:12.543 10:26:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:12.543 10:26:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:12.543 10:26:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:12.802 10:26:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:12.802 10:26:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:12.802 10:26:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:12.802 10:26:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:12.802 10:26:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:12.802 00:05:12.802 real 0m0.213s 00:05:12.802 user 0m0.185s 00:05:12.802 sys 0m0.020s 00:05:12.802 10:26:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.802 10:26:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:12.802 ************************************ 00:05:12.802 END TEST rpc_trace_cmd_test 00:05:12.802 ************************************ 00:05:12.802 10:26:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:12.802 10:26:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:12.802 10:26:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:12.802 10:26:01 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.802 10:26:01 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.802 10:26:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.802 ************************************ 00:05:12.802 START TEST rpc_daemon_integrity 00:05:12.802 ************************************ 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@1121 -- # rpc_integrity 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.802 { 00:05:12.802 "name": "Malloc2", 00:05:12.802 "aliases": [ 00:05:12.802 "07eac156-64ae-4be9-884d-3e5537b29a9e" 00:05:12.802 ], 00:05:12.802 "product_name": "Malloc disk", 00:05:12.802 "block_size": 512, 00:05:12.802 "num_blocks": 16384, 00:05:12.802 "uuid": "07eac156-64ae-4be9-884d-3e5537b29a9e", 00:05:12.802 "assigned_rate_limits": { 00:05:12.802 "rw_ios_per_sec": 0, 
00:05:12.802 "rw_mbytes_per_sec": 0, 00:05:12.802 "r_mbytes_per_sec": 0, 00:05:12.802 "w_mbytes_per_sec": 0 00:05:12.802 }, 00:05:12.802 "claimed": false, 00:05:12.802 "zoned": false, 00:05:12.802 "supported_io_types": { 00:05:12.802 "read": true, 00:05:12.802 "write": true, 00:05:12.802 "unmap": true, 00:05:12.802 "write_zeroes": true, 00:05:12.802 "flush": true, 00:05:12.802 "reset": true, 00:05:12.802 "compare": false, 00:05:12.802 "compare_and_write": false, 00:05:12.802 "abort": true, 00:05:12.802 "nvme_admin": false, 00:05:12.802 "nvme_io": false 00:05:12.802 }, 00:05:12.802 "memory_domains": [ 00:05:12.802 { 00:05:12.802 "dma_device_id": "system", 00:05:12.802 "dma_device_type": 1 00:05:12.802 }, 00:05:12.802 { 00:05:12.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.802 "dma_device_type": 2 00:05:12.802 } 00:05:12.802 ], 00:05:12.802 "driver_specific": {} 00:05:12.802 } 00:05:12.802 ]' 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.802 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.802 [2024-07-23 10:26:01.302958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:12.802 [2024-07-23 10:26:01.303008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.802 [2024-07-23 10:26:01.303038] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15fc2f0 00:05:12.802 [2024-07-23 10:26:01.303054] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.802 [2024-07-23 10:26:01.304670] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.802 
[2024-07-23 10:26:01.304699] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.061 Passthru0 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.061 { 00:05:13.061 "name": "Malloc2", 00:05:13.061 "aliases": [ 00:05:13.061 "07eac156-64ae-4be9-884d-3e5537b29a9e" 00:05:13.061 ], 00:05:13.061 "product_name": "Malloc disk", 00:05:13.061 "block_size": 512, 00:05:13.061 "num_blocks": 16384, 00:05:13.061 "uuid": "07eac156-64ae-4be9-884d-3e5537b29a9e", 00:05:13.061 "assigned_rate_limits": { 00:05:13.061 "rw_ios_per_sec": 0, 00:05:13.061 "rw_mbytes_per_sec": 0, 00:05:13.061 "r_mbytes_per_sec": 0, 00:05:13.061 "w_mbytes_per_sec": 0 00:05:13.061 }, 00:05:13.061 "claimed": true, 00:05:13.061 "claim_type": "exclusive_write", 00:05:13.061 "zoned": false, 00:05:13.061 "supported_io_types": { 00:05:13.061 "read": true, 00:05:13.061 "write": true, 00:05:13.061 "unmap": true, 00:05:13.061 "write_zeroes": true, 00:05:13.061 "flush": true, 00:05:13.061 "reset": true, 00:05:13.061 "compare": false, 00:05:13.061 "compare_and_write": false, 00:05:13.061 "abort": true, 00:05:13.061 "nvme_admin": false, 00:05:13.061 "nvme_io": false 00:05:13.061 }, 00:05:13.061 "memory_domains": [ 00:05:13.061 { 00:05:13.061 "dma_device_id": "system", 00:05:13.061 "dma_device_type": 1 00:05:13.061 }, 00:05:13.061 { 00:05:13.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.061 "dma_device_type": 2 00:05:13.061 } 00:05:13.061 ], 00:05:13.061 
"driver_specific": {} 00:05:13.061 }, 00:05:13.061 { 00:05:13.061 "name": "Passthru0", 00:05:13.061 "aliases": [ 00:05:13.061 "880a7db4-ca68-5435-a30f-87bd566ef650" 00:05:13.061 ], 00:05:13.061 "product_name": "passthru", 00:05:13.061 "block_size": 512, 00:05:13.061 "num_blocks": 16384, 00:05:13.061 "uuid": "880a7db4-ca68-5435-a30f-87bd566ef650", 00:05:13.061 "assigned_rate_limits": { 00:05:13.061 "rw_ios_per_sec": 0, 00:05:13.061 "rw_mbytes_per_sec": 0, 00:05:13.061 "r_mbytes_per_sec": 0, 00:05:13.061 "w_mbytes_per_sec": 0 00:05:13.061 }, 00:05:13.061 "claimed": false, 00:05:13.061 "zoned": false, 00:05:13.061 "supported_io_types": { 00:05:13.061 "read": true, 00:05:13.061 "write": true, 00:05:13.061 "unmap": true, 00:05:13.061 "write_zeroes": true, 00:05:13.061 "flush": true, 00:05:13.061 "reset": true, 00:05:13.061 "compare": false, 00:05:13.061 "compare_and_write": false, 00:05:13.061 "abort": true, 00:05:13.061 "nvme_admin": false, 00:05:13.061 "nvme_io": false 00:05:13.061 }, 00:05:13.061 "memory_domains": [ 00:05:13.061 { 00:05:13.061 "dma_device_id": "system", 00:05:13.061 "dma_device_type": 1 00:05:13.061 }, 00:05:13.061 { 00:05:13.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.061 "dma_device_type": 2 00:05:13.061 } 00:05:13.061 ], 00:05:13.061 "driver_specific": { 00:05:13.061 "passthru": { 00:05:13.061 "name": "Passthru0", 00:05:13.061 "base_bdev_name": "Malloc2" 00:05:13.061 } 00:05:13.061 } 00:05:13.061 } 00:05:13.061 ]' 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.061 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.062 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.062 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.062 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.062 10:26:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.062 00:05:13.062 real 0m0.250s 00:05:13.062 user 0m0.164s 00:05:13.062 sys 0m0.026s 00:05:13.062 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.062 10:26:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.062 ************************************ 00:05:13.062 END TEST rpc_daemon_integrity 00:05:13.062 ************************************ 00:05:13.062 10:26:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:13.062 10:26:01 rpc -- rpc/rpc.sh@84 -- # killprocess 3719153 00:05:13.062 10:26:01 rpc -- common/autotest_common.sh@946 -- # '[' -z 3719153 ']' 00:05:13.062 10:26:01 rpc -- common/autotest_common.sh@950 -- # kill -0 3719153 00:05:13.062 10:26:01 rpc -- common/autotest_common.sh@951 -- # uname 00:05:13.062 10:26:01 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:13.062 10:26:01 rpc -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3719153 00:05:13.062 10:26:01 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:13.062 10:26:01 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:13.062 10:26:01 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3719153' 00:05:13.062 killing process with pid 3719153 00:05:13.062 10:26:01 rpc -- common/autotest_common.sh@965 -- # kill 3719153 00:05:13.062 10:26:01 rpc -- common/autotest_common.sh@970 -- # wait 3719153 00:05:13.321 00:05:13.321 real 0m1.796s 00:05:13.321 user 0m2.383s 00:05:13.321 sys 0m0.582s 00:05:13.321 10:26:01 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.321 10:26:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.321 ************************************ 00:05:13.321 END TEST rpc 00:05:13.321 ************************************ 00:05:13.321 10:26:01 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:13.321 10:26:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.321 10:26:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.321 10:26:01 -- common/autotest_common.sh@10 -- # set +x 00:05:13.321 ************************************ 00:05:13.321 START TEST skip_rpc 00:05:13.321 ************************************ 00:05:13.321 10:26:01 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:13.580 * Looking for test storage... 
00:05:13.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:13.580 10:26:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:13.580 10:26:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:13.580 10:26:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:13.580 10:26:01 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.580 10:26:01 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.580 10:26:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.580 ************************************ 00:05:13.580 START TEST skip_rpc 00:05:13.580 ************************************ 00:05:13.580 10:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:13.580 10:26:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3719515 00:05:13.580 10:26:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:13.580 10:26:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.580 10:26:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:13.580 [2024-07-23 10:26:01.940027] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:13.580 [2024-07-23 10:26:01.940121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719515 ] 00:05:13.580 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.580 [2024-07-23 10:26:02.000770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.838 [2024-07-23 10:26:02.092157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.103 
10:26:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3719515 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3719515 ']' 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3719515 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3719515 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3719515' 00:05:19.103 killing process with pid 3719515 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3719515 00:05:19.103 10:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3719515 00:05:19.103 00:05:19.103 real 0m5.293s 00:05:19.103 user 0m5.013s 00:05:19.103 sys 0m0.271s 00:05:19.103 10:26:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.103 10:26:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.103 ************************************ 00:05:19.103 END TEST skip_rpc 00:05:19.103 ************************************ 00:05:19.103 10:26:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.103 10:26:07 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.103 10:26:07 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.103 10:26:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.103 
************************************ 00:05:19.103 START TEST skip_rpc_with_json 00:05:19.103 ************************************ 00:05:19.103 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:19.103 10:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.103 10:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3720048 00:05:19.103 10:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.103 10:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.103 10:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3720048 00:05:19.103 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3720048 ']' 00:05:19.103 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.103 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:19.103 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.103 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:19.103 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.103 [2024-07-23 10:26:07.284937] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:19.103 [2024-07-23 10:26:07.285040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720048 ] 00:05:19.103 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.103 [2024-07-23 10:26:07.344332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.103 [2024-07-23 10:26:07.431985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.363 [2024-07-23 10:26:07.656225] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:19.363 request: 00:05:19.363 { 00:05:19.363 "trtype": "tcp", 00:05:19.363 "method": "nvmf_get_transports", 00:05:19.363 "req_id": 1 00:05:19.363 } 00:05:19.363 Got JSON-RPC error response 00:05:19.363 response: 00:05:19.363 { 00:05:19.363 "code": -19, 00:05:19.363 "message": "No such device" 00:05:19.363 } 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.363 [2024-07-23 10:26:07.664341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.363 10:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:19.363 { 00:05:19.363 "subsystems": [ 00:05:19.363 { 00:05:19.363 "subsystem": "vfio_user_target", 00:05:19.363 "config": null 00:05:19.363 }, 00:05:19.363 { 00:05:19.363 "subsystem": "keyring", 00:05:19.363 "config": [] 00:05:19.363 }, 00:05:19.363 { 00:05:19.363 "subsystem": "iobuf", 00:05:19.363 "config": [ 00:05:19.363 { 00:05:19.363 "method": "iobuf_set_options", 00:05:19.363 "params": { 00:05:19.363 "small_pool_count": 8192, 00:05:19.363 "large_pool_count": 1024, 00:05:19.363 "small_bufsize": 8192, 00:05:19.363 "large_bufsize": 135168 00:05:19.363 } 00:05:19.363 } 00:05:19.363 ] 00:05:19.363 }, 00:05:19.363 { 00:05:19.363 "subsystem": "sock", 00:05:19.363 "config": [ 00:05:19.363 { 00:05:19.363 "method": "sock_set_default_impl", 00:05:19.363 "params": { 00:05:19.363 "impl_name": "posix" 00:05:19.363 } 00:05:19.363 }, 00:05:19.363 { 00:05:19.363 "method": "sock_impl_set_options", 00:05:19.363 "params": { 00:05:19.363 "impl_name": "ssl", 00:05:19.363 "recv_buf_size": 4096, 00:05:19.363 "send_buf_size": 4096, 00:05:19.363 "enable_recv_pipe": true, 00:05:19.363 "enable_quickack": false, 00:05:19.363 "enable_placement_id": 0, 00:05:19.363 "enable_zerocopy_send_server": true, 00:05:19.363 "enable_zerocopy_send_client": false, 00:05:19.363 "zerocopy_threshold": 0, 00:05:19.363 "tls_version": 0, 
00:05:19.363 "enable_ktls": false 00:05:19.363 } 00:05:19.363 }, 00:05:19.363 { 00:05:19.363 "method": "sock_impl_set_options", 00:05:19.363 "params": { 00:05:19.363 "impl_name": "posix", 00:05:19.363 "recv_buf_size": 2097152, 00:05:19.363 "send_buf_size": 2097152, 00:05:19.363 "enable_recv_pipe": true, 00:05:19.363 "enable_quickack": false, 00:05:19.363 "enable_placement_id": 0, 00:05:19.363 "enable_zerocopy_send_server": true, 00:05:19.363 "enable_zerocopy_send_client": false, 00:05:19.363 "zerocopy_threshold": 0, 00:05:19.363 "tls_version": 0, 00:05:19.363 "enable_ktls": false 00:05:19.363 } 00:05:19.363 } 00:05:19.363 ] 00:05:19.363 }, 00:05:19.363 { 00:05:19.363 "subsystem": "vmd", 00:05:19.363 "config": [] 00:05:19.363 }, 00:05:19.363 { 00:05:19.363 "subsystem": "accel", 00:05:19.363 "config": [ 00:05:19.363 { 00:05:19.363 "method": "accel_set_options", 00:05:19.363 "params": { 00:05:19.363 "small_cache_size": 128, 00:05:19.363 "large_cache_size": 16, 00:05:19.363 "task_count": 2048, 00:05:19.363 "sequence_count": 2048, 00:05:19.363 "buf_count": 2048 00:05:19.363 } 00:05:19.363 } 00:05:19.363 ] 00:05:19.363 }, 00:05:19.363 { 00:05:19.363 "subsystem": "bdev", 00:05:19.363 "config": [ 00:05:19.363 { 00:05:19.363 "method": "bdev_set_options", 00:05:19.363 "params": { 00:05:19.363 "bdev_io_pool_size": 65535, 00:05:19.363 "bdev_io_cache_size": 256, 00:05:19.363 "bdev_auto_examine": true, 00:05:19.363 "iobuf_small_cache_size": 128, 00:05:19.363 "iobuf_large_cache_size": 16 00:05:19.363 } 00:05:19.363 }, 00:05:19.363 { 00:05:19.363 "method": "bdev_raid_set_options", 00:05:19.363 "params": { 00:05:19.363 "process_window_size_kb": 1024 00:05:19.363 } 00:05:19.363 }, 00:05:19.363 { 00:05:19.363 "method": "bdev_iscsi_set_options", 00:05:19.363 "params": { 00:05:19.363 "timeout_sec": 30 00:05:19.363 } 00:05:19.363 }, 00:05:19.363 { 00:05:19.363 "method": "bdev_nvme_set_options", 00:05:19.363 "params": { 00:05:19.363 "action_on_timeout": "none", 00:05:19.363 "timeout_us": 
0, 00:05:19.363 "timeout_admin_us": 0, 00:05:19.363 "keep_alive_timeout_ms": 10000, 00:05:19.363 "arbitration_burst": 0, 00:05:19.363 "low_priority_weight": 0, 00:05:19.363 "medium_priority_weight": 0, 00:05:19.363 "high_priority_weight": 0, 00:05:19.363 "nvme_adminq_poll_period_us": 10000, 00:05:19.363 "nvme_ioq_poll_period_us": 0, 00:05:19.363 "io_queue_requests": 0, 00:05:19.363 "delay_cmd_submit": true, 00:05:19.363 "transport_retry_count": 4, 00:05:19.363 "bdev_retry_count": 3, 00:05:19.363 "transport_ack_timeout": 0, 00:05:19.363 "ctrlr_loss_timeout_sec": 0, 00:05:19.363 "reconnect_delay_sec": 0, 00:05:19.363 "fast_io_fail_timeout_sec": 0, 00:05:19.363 "disable_auto_failback": false, 00:05:19.364 "generate_uuids": false, 00:05:19.364 "transport_tos": 0, 00:05:19.364 "nvme_error_stat": false, 00:05:19.364 "rdma_srq_size": 0, 00:05:19.364 "io_path_stat": false, 00:05:19.364 "allow_accel_sequence": false, 00:05:19.364 "rdma_max_cq_size": 0, 00:05:19.364 "rdma_cm_event_timeout_ms": 0, 00:05:19.364 "dhchap_digests": [ 00:05:19.364 "sha256", 00:05:19.364 "sha384", 00:05:19.364 "sha512" 00:05:19.364 ], 00:05:19.364 "dhchap_dhgroups": [ 00:05:19.364 "null", 00:05:19.364 "ffdhe2048", 00:05:19.364 "ffdhe3072", 00:05:19.364 "ffdhe4096", 00:05:19.364 "ffdhe6144", 00:05:19.364 "ffdhe8192" 00:05:19.364 ] 00:05:19.364 } 00:05:19.364 }, 00:05:19.364 { 00:05:19.364 "method": "bdev_nvme_set_hotplug", 00:05:19.364 "params": { 00:05:19.364 "period_us": 100000, 00:05:19.364 "enable": false 00:05:19.364 } 00:05:19.364 }, 00:05:19.364 { 00:05:19.364 "method": "bdev_wait_for_examine" 00:05:19.364 } 00:05:19.364 ] 00:05:19.364 }, 00:05:19.364 { 00:05:19.364 "subsystem": "scsi", 00:05:19.364 "config": null 00:05:19.364 }, 00:05:19.364 { 00:05:19.364 "subsystem": "scheduler", 00:05:19.364 "config": [ 00:05:19.364 { 00:05:19.364 "method": "framework_set_scheduler", 00:05:19.364 "params": { 00:05:19.364 "name": "static" 00:05:19.364 } 00:05:19.364 } 00:05:19.364 ] 00:05:19.364 }, 
00:05:19.364 { 00:05:19.364 "subsystem": "vhost_scsi", 00:05:19.364 "config": [] 00:05:19.364 }, 00:05:19.364 { 00:05:19.364 "subsystem": "vhost_blk", 00:05:19.364 "config": [] 00:05:19.364 }, 00:05:19.364 { 00:05:19.364 "subsystem": "ublk", 00:05:19.364 "config": [] 00:05:19.364 }, 00:05:19.364 { 00:05:19.364 "subsystem": "nbd", 00:05:19.364 "config": [] 00:05:19.364 }, 00:05:19.364 { 00:05:19.364 "subsystem": "nvmf", 00:05:19.364 "config": [ 00:05:19.364 { 00:05:19.364 "method": "nvmf_set_config", 00:05:19.364 "params": { 00:05:19.364 "discovery_filter": "match_any", 00:05:19.364 "admin_cmd_passthru": { 00:05:19.364 "identify_ctrlr": false 00:05:19.364 } 00:05:19.364 } 00:05:19.364 }, 00:05:19.364 { 00:05:19.364 "method": "nvmf_set_max_subsystems", 00:05:19.364 "params": { 00:05:19.364 "max_subsystems": 1024 00:05:19.364 } 00:05:19.364 }, 00:05:19.364 { 00:05:19.364 "method": "nvmf_set_crdt", 00:05:19.364 "params": { 00:05:19.364 "crdt1": 0, 00:05:19.364 "crdt2": 0, 00:05:19.364 "crdt3": 0 00:05:19.364 } 00:05:19.364 }, 00:05:19.364 { 00:05:19.364 "method": "nvmf_create_transport", 00:05:19.364 "params": { 00:05:19.364 "trtype": "TCP", 00:05:19.364 "max_queue_depth": 128, 00:05:19.364 "max_io_qpairs_per_ctrlr": 127, 00:05:19.364 "in_capsule_data_size": 4096, 00:05:19.364 "max_io_size": 131072, 00:05:19.364 "io_unit_size": 131072, 00:05:19.364 "max_aq_depth": 128, 00:05:19.364 "num_shared_buffers": 511, 00:05:19.364 "buf_cache_size": 4294967295, 00:05:19.364 "dif_insert_or_strip": false, 00:05:19.364 "zcopy": false, 00:05:19.364 "c2h_success": true, 00:05:19.364 "sock_priority": 0, 00:05:19.364 "abort_timeout_sec": 1, 00:05:19.364 "ack_timeout": 0, 00:05:19.364 "data_wr_pool_size": 0 00:05:19.364 } 00:05:19.364 } 00:05:19.364 ] 00:05:19.364 }, 00:05:19.364 { 00:05:19.364 "subsystem": "iscsi", 00:05:19.364 "config": [ 00:05:19.364 { 00:05:19.364 "method": "iscsi_set_options", 00:05:19.364 "params": { 00:05:19.364 "node_base": "iqn.2016-06.io.spdk", 00:05:19.364 
"max_sessions": 128, 00:05:19.364 "max_connections_per_session": 2, 00:05:19.364 "max_queue_depth": 64, 00:05:19.364 "default_time2wait": 2, 00:05:19.364 "default_time2retain": 20, 00:05:19.364 "first_burst_length": 8192, 00:05:19.364 "immediate_data": true, 00:05:19.364 "allow_duplicated_isid": false, 00:05:19.364 "error_recovery_level": 0, 00:05:19.364 "nop_timeout": 60, 00:05:19.364 "nop_in_interval": 30, 00:05:19.364 "disable_chap": false, 00:05:19.364 "require_chap": false, 00:05:19.364 "mutual_chap": false, 00:05:19.364 "chap_group": 0, 00:05:19.364 "max_large_datain_per_connection": 64, 00:05:19.364 "max_r2t_per_connection": 4, 00:05:19.364 "pdu_pool_size": 36864, 00:05:19.364 "immediate_data_pool_size": 16384, 00:05:19.364 "data_out_pool_size": 2048 00:05:19.364 } 00:05:19.364 } 00:05:19.364 ] 00:05:19.364 } 00:05:19.364 ] 00:05:19.364 } 00:05:19.364 10:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:19.364 10:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3720048 00:05:19.364 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3720048 ']' 00:05:19.364 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3720048 00:05:19.364 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:19.364 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:19.364 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3720048 00:05:19.364 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:19.364 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:19.364 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3720048' 00:05:19.364 killing process with pid 3720048 
00:05:19.364 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3720048 00:05:19.364 10:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3720048 00:05:19.623 10:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3720068 00:05:19.623 10:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:19.623 10:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:24.889 10:26:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3720068 00:05:24.889 10:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3720068 ']' 00:05:24.889 10:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3720068 00:05:24.889 10:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:24.889 10:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:24.889 10:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3720068 00:05:24.889 10:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:24.889 10:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:24.889 10:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3720068' 00:05:24.889 killing process with pid 3720068 00:05:24.889 10:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3720068 00:05:24.889 10:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3720068 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:25.148 00:05:25.148 real 0m6.195s 00:05:25.148 user 0m5.880s 00:05:25.148 sys 0m0.630s 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.148 ************************************ 00:05:25.148 END TEST skip_rpc_with_json 00:05:25.148 ************************************ 00:05:25.148 10:26:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:25.148 10:26:13 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.148 10:26:13 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.148 10:26:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.148 ************************************ 00:05:25.148 START TEST skip_rpc_with_delay 00:05:25.148 ************************************ 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.148 [2024-07-23 10:26:13.537754] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
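The `NOT` wrapper exercised above is a negative test: it runs `spdk_tgt --no-rpc-server --wait-for-rpc` and passes only if the target refuses the flag combination (the `Cannot use '--wait-for-rpc' if no RPC server is going to be started` error). A minimal Python sketch of that inverted-expectation pattern, with `expect_failure` and the `false` stand-in command being illustrative names rather than anything from the SPDK tree:

```python
import subprocess

def expect_failure(cmd):
    """Pass only if cmd exits non-zero, mirroring the shell NOT helper."""
    rc = subprocess.run(cmd).returncode
    return rc != 0

# The delay test expects spdk_tgt to reject --wait-for-rpc when the RPC
# server is disabled; 'false' stands in for that failing invocation here.
print(expect_failure(["false"]))  # True
```

The surrounding `es=1` bookkeeping in the log records the captured exit status so the trap handler can distinguish an expected failure from a crash.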
00:05:25.148 [2024-07-23 10:26:13.537887] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.148 00:05:25.148 real 0m0.076s 00:05:25.148 user 0m0.044s 00:05:25.148 sys 0m0.032s 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.148 10:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:25.148 ************************************ 00:05:25.148 END TEST skip_rpc_with_delay 00:05:25.148 ************************************ 00:05:25.148 10:26:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:25.148 10:26:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:25.148 10:26:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:25.148 10:26:13 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.148 10:26:13 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.148 10:26:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.148 ************************************ 00:05:25.148 START TEST exit_on_failed_rpc_init 00:05:25.148 ************************************ 00:05:25.148 10:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:25.148 10:26:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3720620 00:05:25.148 10:26:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.148 10:26:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3720620 00:05:25.148 10:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3720620 ']' 00:05:25.148 10:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.148 10:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:25.148 10:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.148 10:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:25.148 10:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.406 [2024-07-23 10:26:13.666547] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:25.406 [2024-07-23 10:26:13.666630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720620 ] 00:05:25.406 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.406 [2024-07-23 10:26:13.727292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.406 [2024-07-23 10:26:13.818888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:25.665 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.665 [2024-07-23 10:26:14.098323] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:25.665 [2024-07-23 10:26:14.098417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720714 ] 00:05:25.665 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.665 [2024-07-23 10:26:14.158839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.924 [2024-07-23 10:26:14.250244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.924 [2024-07-23 10:26:14.250376] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
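The `RPC Unix domain socket path /var/tmp/spdk.sock in use` error above is the point of the `exit_on_failed_rpc_init` test: a second `spdk_tgt` bound to the same default RPC socket must fail RPC init and exit non-zero. The same OS-level behavior can be sketched with plain UNIX sockets (the function name and paths here are illustrative, not SPDK code):

```python
import socket

def second_bind_fails(path):
    """Bind one AF_UNIX socket at path, then show a second bind of the
    same path raises OSError (EADDRINUSE) - the condition spdk_tgt hits."""
    first = socket.socket(socket.AF_UNIX)
    first.bind(path)
    second = socket.socket(socket.AF_UNIX)
    try:
        second.bind(path)  # same filesystem path already claimed
        return False
    except OSError:
        return True
    finally:
        second.close()
        first.close()
```

This is why the second target in the log is started with a different core mask but still aborts: the conflict is on the socket path, not the CPU cores, and `spdk_app_stop` is then called with a non-zero status.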
00:05:25.924 [2024-07-23 10:26:14.250397] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:25.924 [2024-07-23 10:26:14.250411] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3720620 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3720620 ']' 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3720620 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3720620 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3720620' 
00:05:25.924 killing process with pid 3720620 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3720620 00:05:25.924 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3720620 00:05:26.183 00:05:26.183 real 0m1.014s 00:05:26.183 user 0m1.179s 00:05:26.183 sys 0m0.426s 00:05:26.183 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.183 10:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.183 ************************************ 00:05:26.183 END TEST exit_on_failed_rpc_init 00:05:26.183 ************************************ 00:05:26.183 10:26:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.183 00:05:26.183 real 0m12.851s 00:05:26.183 user 0m12.236s 00:05:26.183 sys 0m1.527s 00:05:26.183 10:26:14 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.183 10:26:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.183 ************************************ 00:05:26.183 END TEST skip_rpc 00:05:26.183 ************************************ 00:05:26.183 10:26:14 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:26.183 10:26:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.183 10:26:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.183 10:26:14 -- common/autotest_common.sh@10 -- # set +x 00:05:26.442 ************************************ 00:05:26.442 START TEST rpc_client 00:05:26.442 ************************************ 00:05:26.442 10:26:14 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:26.442 * Looking for test storage... 
00:05:26.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:26.442 10:26:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:26.442 OK 00:05:26.442 10:26:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:26.442 00:05:26.442 real 0m0.067s 00:05:26.442 user 0m0.033s 00:05:26.442 sys 0m0.038s 00:05:26.442 10:26:14 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.442 10:26:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:26.442 ************************************ 00:05:26.442 END TEST rpc_client 00:05:26.442 ************************************ 00:05:26.442 10:26:14 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:26.442 10:26:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.442 10:26:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.442 10:26:14 -- common/autotest_common.sh@10 -- # set +x 00:05:26.442 ************************************ 00:05:26.442 START TEST json_config 00:05:26.442 ************************************ 00:05:26.442 10:26:14 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:26.442 10:26:14 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.442 10:26:14 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.442 10:26:14 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.442 10:26:14 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.442 10:26:14 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.442 10:26:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:26.442 10:26:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.442 10:26:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.442 10:26:14 json_config -- paths/export.sh@5 -- # export PATH 00:05:26.442 10:26:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@47 -- # : 0 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.442 10:26:14 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.442 10:26:14 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.442 10:26:14 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:26.442 10:26:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:26.442 10:26:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:26.442 10:26:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:26.442 10:26:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:26.442 10:26:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:26.442 10:26:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:26.443 10:26:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:26.443 10:26:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:26.443 10:26:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:26.443 10:26:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:26.443 10:26:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:26.443 10:26:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:26.443 10:26:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:26.443 10:26:14 json_config -- 
json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.443 10:26:14 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:26.443 INFO: JSON configuration test init 00:05:26.443 10:26:14 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:26.443 10:26:14 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:26.443 10:26:14 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:26.443 10:26:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.443 10:26:14 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:26.443 10:26:14 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:26.443 10:26:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.443 10:26:14 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:26.443 10:26:14 json_config -- json_config/common.sh@9 -- # local app=target 00:05:26.443 10:26:14 json_config -- json_config/common.sh@10 -- # shift 00:05:26.443 10:26:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.443 10:26:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.443 10:26:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.443 10:26:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.443 10:26:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.443 10:26:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3720921 00:05:26.443 10:26:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:26.443 10:26:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:26.443 Waiting for target to run... 00:05:26.443 10:26:14 json_config -- json_config/common.sh@25 -- # waitforlisten 3720921 /var/tmp/spdk_tgt.sock 00:05:26.443 10:26:14 json_config -- common/autotest_common.sh@827 -- # '[' -z 3720921 ']' 00:05:26.443 10:26:14 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.443 10:26:14 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:26.443 10:26:14 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.443 10:26:14 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:26.443 10:26:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.443 [2024-07-23 10:26:14.928058] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
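`waitforlisten` above blocks until the target process is up and listening on its UNIX domain socket (`/var/tmp/spdk_tgt.sock`) before any RPC is attempted. A rough Python equivalent of that polling loop, under the assumption that "listening" can be approximated by the socket file appearing (the real helper also issues a probe RPC; `wait_for_listen` is a hypothetical name):

```python
import os
import stat
import time

def wait_for_listen(sock_path, retries=100, delay=0.1):
    """Poll until a UNIX socket node exists at sock_path, or give up."""
    for _ in range(retries):
        try:
            if stat.S_ISSOCK(os.stat(sock_path).st_mode):
                return True
        except FileNotFoundError:
            pass  # target has not created its RPC socket yet
        time.sleep(delay)
    return False
```

The `max_retries=100` local in the log plays the same role as the `retries` bound here: the test fails fast instead of hanging forever if the target never starts.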
00:05:26.443 [2024-07-23 10:26:14.928160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720921 ] 00:05:26.702 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.960 [2024-07-23 10:26:15.227166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.960 [2024-07-23 10:26:15.293461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.527 10:26:15 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:27.527 10:26:15 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:27.527 10:26:15 json_config -- json_config/common.sh@26 -- # echo '' 00:05:27.527 00:05:27.527 10:26:15 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:27.527 10:26:15 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:27.527 10:26:15 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:27.527 10:26:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.527 10:26:15 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:27.527 10:26:15 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:27.527 10:26:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:27.527 10:26:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.527 10:26:15 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:27.527 10:26:15 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:27.527 10:26:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:30.813 10:26:19 json_config -- 
json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:30.813 10:26:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:30.813 10:26:19 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:30.813 10:26:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.813 10:26:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:30.813 10:26:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:30.813 10:26:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:30.813 10:26:19 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:30.813 10:26:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:30.813 10:26:19 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:31.072 10:26:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.072 10:26:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 
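`tgt_check_notification_types` above compares the types returned by the `notify_get_types` RPC (extracted with `jq -r '.[]'`) against the expected `bdev_register bdev_unregister` pair, returning 0 only on an exact match. The same check in Python, with the `response` literal standing in for a hypothetical RPC reply rather than captured output:

```python
import json

# Stand-in for the JSON body returned by:
#   rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
response = json.loads('["bdev_register", "bdev_unregister"]')

expected = ["bdev_register", "bdev_unregister"]
# Order-insensitive comparison; the shell test compares the glob-escaped
# joined strings instead, which is order-sensitive.
print(sorted(response) == sorted(expected))  # True
```

A mismatch here would set `ret=1` in the shell version and fail the `json_config` suite before any subsystem configuration is attempted.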
00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:31.072 10:26:19 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:31.072 10:26:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:31.072 10:26:19 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.072 10:26:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.330 MallocForNvmf0 00:05:31.330 10:26:19 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:31.330 10:26:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:31.589 MallocForNvmf1 00:05:31.589 10:26:20 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:31.589 10:26:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.181 [2024-07-23 10:26:20.366283] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.181 10:26:20 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.181 10:26:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.463 10:26:20 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.463 10:26:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.722 10:26:20 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.722 10:26:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.980 10:26:21 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:32.980 10:26:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.238 [2024-07-23 10:26:21.537998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.238 10:26:21 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:33.238 10:26:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.238 10:26:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.238 10:26:21 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:33.238 10:26:21 
json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.238 10:26:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.238 10:26:21 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:33.238 10:26:21 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.238 10:26:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.497 MallocBdevForConfigChangeCheck 00:05:33.497 10:26:21 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:33.497 10:26:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.497 10:26:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.497 10:26:21 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:33.497 10:26:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.063 10:26:22 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:34.063 INFO: shutting down applications... 
00:05:34.063 10:26:22 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:34.063 10:26:22 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:34.063 10:26:22 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:34.063 10:26:22 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.962 Calling clear_iscsi_subsystem 00:05:35.962 Calling clear_nvmf_subsystem 00:05:35.962 Calling clear_nbd_subsystem 00:05:35.962 Calling clear_ublk_subsystem 00:05:35.962 Calling clear_vhost_blk_subsystem 00:05:35.962 Calling clear_vhost_scsi_subsystem 00:05:35.962 Calling clear_bdev_subsystem 00:05:35.962 10:26:23 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:35.962 10:26:23 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:35.962 10:26:23 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:35.962 10:26:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.962 10:26:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.962 10:26:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:35.962 10:26:24 json_config -- json_config/json_config.sh@345 -- # break 00:05:35.962 10:26:24 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:35.962 10:26:24 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:35.962 10:26:24 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:35.962 10:26:24 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.962 10:26:24 json_config -- json_config/common.sh@35 -- # [[ -n 3720921 ]] 00:05:35.962 10:26:24 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3720921 00:05:35.962 10:26:24 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.962 10:26:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.962 10:26:24 json_config -- json_config/common.sh@41 -- # kill -0 3720921 00:05:35.962 10:26:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.529 10:26:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.529 10:26:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.529 10:26:24 json_config -- json_config/common.sh@41 -- # kill -0 3720921 00:05:36.529 10:26:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.529 10:26:24 json_config -- json_config/common.sh@43 -- # break 00:05:36.529 10:26:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.529 10:26:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:36.529 SPDK target shutdown done 00:05:36.529 10:26:24 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:36.529 INFO: relaunching applications... 
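The shutdown sequence logged above (send SIGINT to the target pid, then probe it with `kill -0` up to 30 times with a 0.5 s sleep between probes) can be sketched as a small standalone script. This is an illustrative simplification, not the actual `json_config/common.sh` helper: `sleep` stands in for `spdk_tgt`, and SIGTERM replaces SIGINT because non-interactive shells start background jobs with SIGINT ignored.

```shell
#!/usr/bin/env bash
# Hedged sketch of the shutdown wait loop seen in the log above.
# Assumptions: `sleep` stands in for the spdk_tgt process; SIGTERM is used
# instead of SIGINT (background jobs in non-interactive shells ignore SIGINT).
sleep 30 &
pid=$!

kill -TERM "$pid"
result='timed out waiting for target'
for ((i = 0; i < 30; i++)); do
    # kill -0 sends no signal; it only checks whether the pid still exists
    if ! kill -0 "$pid" 2>/dev/null; then
        result='SPDK target shutdown done'
        break
    fi
    sleep 0.5
done
wait "$pid" 2>/dev/null || true   # collect exit status (143 = 128 + SIGTERM)
echo "$result"
```

The real helper additionally falls through to a hard `kill -9` path when the 30 iterations are exhausted; that branch is omitted here for brevity.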
00:05:36.529 10:26:24 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.529 10:26:24 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.529 10:26:24 json_config -- json_config/common.sh@10 -- # shift 00:05:36.529 10:26:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.529 10:26:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.529 10:26:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.529 10:26:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.529 10:26:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.529 10:26:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3721954 00:05:36.529 10:26:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.529 10:26:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.529 Waiting for target to run... 00:05:36.529 10:26:24 json_config -- json_config/common.sh@25 -- # waitforlisten 3721954 /var/tmp/spdk_tgt.sock 00:05:36.529 10:26:24 json_config -- common/autotest_common.sh@827 -- # '[' -z 3721954 ']' 00:05:36.529 10:26:24 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.529 10:26:24 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.529 10:26:24 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:36.529 10:26:24 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.529 10:26:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.530 [2024-07-23 10:26:24.976966] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:36.530 [2024-07-23 10:26:24.977067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721954 ] 00:05:36.530 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.788 [2024-07-23 10:26:25.280354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.047 [2024-07-23 10:26:25.346905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.340 [2024-07-23 10:26:28.360385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.340 [2024-07-23 10:26:28.392767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:40.340 10:26:28 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:40.340 10:26:28 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:40.340 10:26:28 json_config -- json_config/common.sh@26 -- # echo '' 00:05:40.340 00:05:40.340 10:26:28 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:40.340 10:26:28 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:40.340 INFO: Checking if target configuration is the same... 
00:05:40.340 10:26:28 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.340 10:26:28 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:40.340 10:26:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.340 + '[' 2 -ne 2 ']' 00:05:40.340 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:40.340 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:40.340 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:40.340 +++ basename /dev/fd/62 00:05:40.340 ++ mktemp /tmp/62.XXX 00:05:40.340 + tmp_file_1=/tmp/62.lgq 00:05:40.340 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.340 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:40.340 + tmp_file_2=/tmp/spdk_tgt_config.json.SU5 00:05:40.340 + ret=0 00:05:40.340 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.598 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.598 + diff -u /tmp/62.lgq /tmp/spdk_tgt_config.json.SU5 00:05:40.598 + echo 'INFO: JSON config files are the same' 00:05:40.598 INFO: JSON config files are the same 00:05:40.598 + rm /tmp/62.lgq /tmp/spdk_tgt_config.json.SU5 00:05:40.598 + exit 0 00:05:40.598 10:26:28 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:40.598 10:26:28 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:40.598 INFO: changing configuration and checking if this can be detected... 
00:05:40.598 10:26:28 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:40.598 10:26:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:40.856 10:26:29 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.856 10:26:29 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:40.856 10:26:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.856 + '[' 2 -ne 2 ']' 00:05:40.856 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:40.856 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:40.856 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:40.856 +++ basename /dev/fd/62 00:05:40.856 ++ mktemp /tmp/62.XXX 00:05:40.856 + tmp_file_1=/tmp/62.bPG 00:05:40.856 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.856 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:40.856 + tmp_file_2=/tmp/spdk_tgt_config.json.XHb 00:05:40.856 + ret=0 00:05:40.856 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.423 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.423 + diff -u /tmp/62.bPG /tmp/spdk_tgt_config.json.XHb 00:05:41.423 + ret=1 00:05:41.423 + echo '=== Start of file: /tmp/62.bPG ===' 00:05:41.423 + cat /tmp/62.bPG 00:05:41.423 + echo '=== End of file: /tmp/62.bPG ===' 00:05:41.423 + echo '' 00:05:41.423 + echo '=== Start of file: /tmp/spdk_tgt_config.json.XHb ===' 00:05:41.423 + cat /tmp/spdk_tgt_config.json.XHb 00:05:41.423 + echo '=== End of file: /tmp/spdk_tgt_config.json.XHb ===' 00:05:41.423 + echo '' 00:05:41.423 + rm /tmp/62.bPG /tmp/spdk_tgt_config.json.XHb 00:05:41.423 + exit 1 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:41.423 INFO: configuration change detected. 
00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@317 -- # [[ -n 3721954 ]] 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.423 10:26:29 json_config -- json_config/json_config.sh@323 -- # killprocess 3721954 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@946 -- # '[' -z 3721954 ']' 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@950 -- # kill -0 
3721954 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@951 -- # uname 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3721954 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3721954' 00:05:41.423 killing process with pid 3721954 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@965 -- # kill 3721954 00:05:41.423 10:26:29 json_config -- common/autotest_common.sh@970 -- # wait 3721954 00:05:42.799 10:26:31 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.799 10:26:31 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:42.799 10:26:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.799 10:26:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.058 10:26:31 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:43.058 10:26:31 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:43.058 INFO: Success 00:05:43.058 00:05:43.058 real 0m16.496s 00:05:43.058 user 0m19.294s 00:05:43.058 sys 0m1.875s 00:05:43.058 10:26:31 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.058 10:26:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.058 ************************************ 00:05:43.058 END TEST json_config 00:05:43.058 ************************************ 00:05:43.058 10:26:31 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
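The `killprocess` teardown above guards against pid reuse: before killing, it checks the pid is still alive (`kill -0`) and that its command name (`ps --no-headers -o comm=`) matches the expected process. A minimal sketch under the assumption that `sleep` stands in for the reactor process and GNU procps `ps` is available:

```shell
#!/usr/bin/env bash
# Hedged sketch of the killprocess guard: only kill the pid if it still
# exists and still names the process we launched (here, `sleep`).
sleep 30 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then
    process_name=$(ps --no-headers -o comm= -p "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reap; 143 = terminated by SIGTERM
fi
```

Without the `comm=` check, a stale pid recycled by the kernel could cause the teardown to signal an unrelated process.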
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.058 10:26:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.058 10:26:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.058 10:26:31 -- common/autotest_common.sh@10 -- # set +x 00:05:43.058 ************************************ 00:05:43.058 START TEST json_config_extra_key 00:05:43.058 ************************************ 00:05:43.058 10:26:31 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.058 10:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.058 10:26:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:43.058 10:26:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.058 10:26:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.058 10:26:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.058 10:26:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.058 10:26:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.058 10:26:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.058 10:26:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.058 10:26:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.058 10:26:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.058 10:26:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:43.059 10:26:31 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.059 10:26:31 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.059 10:26:31 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.059 10:26:31 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.059 10:26:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.059 10:26:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.059 10:26:31 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.059 10:26:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:43.059 10:26:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:43.059 10:26:31 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:43.059 10:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:43.059 10:26:31 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:43.059 10:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:43.059 10:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.059 10:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:43.059 10:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.059 10:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:43.059 10:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:43.059 10:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:43.059 10:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.059 10:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:43.059 INFO: launching applications... 
00:05:43.059 10:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.059 10:26:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:43.059 10:26:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:43.059 10:26:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.059 10:26:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.059 10:26:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.059 10:26:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.059 10:26:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.059 10:26:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3722636 00:05:43.059 10:26:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.059 Waiting for target to run... 
00:05:43.059 10:26:31 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.059 10:26:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3722636 /var/tmp/spdk_tgt.sock 00:05:43.059 10:26:31 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3722636 ']' 00:05:43.059 10:26:31 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.059 10:26:31 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:43.059 10:26:31 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.059 10:26:31 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:43.059 10:26:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.059 [2024-07-23 10:26:31.473358] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:43.059 [2024-07-23 10:26:31.473451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722636 ] 00:05:43.059 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.625 [2024-07-23 10:26:31.830318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.625 [2024-07-23 10:26:31.897664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.192 10:26:32 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:44.192 10:26:32 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:44.192 10:26:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:44.192 00:05:44.192 10:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:44.192 INFO: shutting down applications... 
00:05:44.192 10:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:44.192 10:26:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:44.192 10:26:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.192 10:26:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3722636 ]] 00:05:44.192 10:26:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3722636 00:05:44.192 10:26:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.192 10:26:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.192 10:26:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3722636 00:05:44.192 10:26:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.761 10:26:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.761 10:26:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.761 10:26:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3722636 00:05:44.761 10:26:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:44.761 10:26:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:44.761 10:26:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:44.761 10:26:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:44.761 SPDK target shutdown done 00:05:44.761 10:26:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:44.761 Success 00:05:44.761 00:05:44.761 real 0m1.654s 00:05:44.761 user 0m1.522s 00:05:44.761 sys 0m0.465s 00:05:44.761 10:26:33 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.761 10:26:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.761 
************************************ 00:05:44.761 END TEST json_config_extra_key 00:05:44.761 ************************************ 00:05:44.761 10:26:33 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.761 10:26:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.761 10:26:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.761 10:26:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.761 ************************************ 00:05:44.761 START TEST alias_rpc 00:05:44.761 ************************************ 00:05:44.761 10:26:33 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.762 * Looking for test storage... 00:05:44.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:44.762 10:26:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.762 10:26:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3722831 00:05:44.762 10:26:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3722831 00:05:44.762 10:26:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.762 10:26:33 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3722831 ']' 00:05:44.762 10:26:33 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.762 10:26:33 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:44.762 10:26:33 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.762 10:26:33 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:44.762 10:26:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.762 [2024-07-23 10:26:33.172782] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:44.762 [2024-07-23 10:26:33.172879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722831 ] 00:05:44.762 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.762 [2024-07-23 10:26:33.235494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.019 [2024-07-23 10:26:33.323308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.277 10:26:33 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.277 10:26:33 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:45.277 10:26:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:45.536 10:26:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3722831 00:05:45.536 10:26:33 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3722831 ']' 00:05:45.536 10:26:33 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3722831 00:05:45.536 10:26:33 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:45.536 10:26:33 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:45.536 10:26:33 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3722831 00:05:45.536 10:26:33 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:45.536 10:26:33 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:45.536 10:26:33 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3722831' 00:05:45.536 killing process 
with pid 3722831 00:05:45.536 10:26:33 alias_rpc -- common/autotest_common.sh@965 -- # kill 3722831 00:05:45.536 10:26:33 alias_rpc -- common/autotest_common.sh@970 -- # wait 3722831 00:05:45.795 00:05:45.795 real 0m1.093s 00:05:45.795 user 0m1.281s 00:05:45.795 sys 0m0.398s 00:05:45.795 10:26:34 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.795 10:26:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.795 ************************************ 00:05:45.795 END TEST alias_rpc 00:05:45.795 ************************************ 00:05:45.795 10:26:34 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:45.795 10:26:34 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:45.795 10:26:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.795 10:26:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.795 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:05:45.795 ************************************ 00:05:45.795 START TEST spdkcli_tcp 00:05:45.795 ************************************ 00:05:45.795 10:26:34 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:45.795 * Looking for test storage... 
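The `killprocess` sequence traced here (`autotest_common.sh@946-970`) checks that the pid is still alive, inspects its comm name via `ps` on Linux, refuses to kill a `sudo` wrapper, then kills and reaps it. A hedged reconstruction under those assumptions (the body is illustrative, not the exact SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper exercised in the trace above.

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone
    if [ "$(uname)" = Linux ]; then
        local name
        # comm name of the target, e.g. "reactor_0" in the trace
        name=$(ps --no-headers -o comm= "$pid")
        # the trace bails out rather than plain-kill a sudo wrapper
        [ "$name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid" 2>/dev/null
    # wait only reaps children of this shell; ignore failure otherwise
    wait "$pid" 2>/dev/null || true
    return 0
}

sleep 60 &
killprocess $! && echo "process reaped"
```

The `ps --no-headers -o comm=` probe is what produces the `process_name=reactor_0` assignments seen in the xtrace.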
00:05:45.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:45.795 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:45.795 10:26:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:45.795 10:26:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:45.795 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:45.795 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:45.795 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:45.795 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:45.795 10:26:34 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:45.795 10:26:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.795 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3722984 00:05:45.795 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:45.795 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3722984 00:05:45.795 10:26:34 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3722984 ']' 00:05:45.795 10:26:34 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.795 10:26:34 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.795 10:26:34 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.795 10:26:34 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.795 10:26:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.053 [2024-07-23 10:26:34.319910] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:46.053 [2024-07-23 10:26:34.320013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722984 ] 00:05:46.053 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.053 [2024-07-23 10:26:34.380128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.053 [2024-07-23 10:26:34.468678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.053 [2024-07-23 10:26:34.468691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.312 10:26:34 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:46.312 10:26:34 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:46.312 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3723010 00:05:46.312 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:46.312 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:46.571 [ 00:05:46.571 "bdev_malloc_delete", 00:05:46.571 "bdev_malloc_create", 00:05:46.571 "bdev_null_resize", 00:05:46.571 "bdev_null_delete", 00:05:46.571 "bdev_null_create", 00:05:46.571 "bdev_nvme_cuse_unregister", 00:05:46.571 "bdev_nvme_cuse_register", 00:05:46.571 "bdev_opal_new_user", 00:05:46.571 "bdev_opal_set_lock_state", 00:05:46.571 "bdev_opal_delete", 00:05:46.571 "bdev_opal_get_info", 00:05:46.571 "bdev_opal_create", 00:05:46.571 "bdev_nvme_opal_revert", 00:05:46.571 "bdev_nvme_opal_init", 
00:05:46.571 "bdev_nvme_send_cmd", 00:05:46.571 "bdev_nvme_get_path_iostat", 00:05:46.571 "bdev_nvme_get_mdns_discovery_info", 00:05:46.571 "bdev_nvme_stop_mdns_discovery", 00:05:46.571 "bdev_nvme_start_mdns_discovery", 00:05:46.571 "bdev_nvme_set_multipath_policy", 00:05:46.571 "bdev_nvme_set_preferred_path", 00:05:46.571 "bdev_nvme_get_io_paths", 00:05:46.571 "bdev_nvme_remove_error_injection", 00:05:46.571 "bdev_nvme_add_error_injection", 00:05:46.571 "bdev_nvme_get_discovery_info", 00:05:46.571 "bdev_nvme_stop_discovery", 00:05:46.571 "bdev_nvme_start_discovery", 00:05:46.571 "bdev_nvme_get_controller_health_info", 00:05:46.571 "bdev_nvme_disable_controller", 00:05:46.571 "bdev_nvme_enable_controller", 00:05:46.571 "bdev_nvme_reset_controller", 00:05:46.571 "bdev_nvme_get_transport_statistics", 00:05:46.571 "bdev_nvme_apply_firmware", 00:05:46.571 "bdev_nvme_detach_controller", 00:05:46.571 "bdev_nvme_get_controllers", 00:05:46.571 "bdev_nvme_attach_controller", 00:05:46.571 "bdev_nvme_set_hotplug", 00:05:46.571 "bdev_nvme_set_options", 00:05:46.571 "bdev_passthru_delete", 00:05:46.571 "bdev_passthru_create", 00:05:46.571 "bdev_lvol_set_parent_bdev", 00:05:46.571 "bdev_lvol_set_parent", 00:05:46.571 "bdev_lvol_check_shallow_copy", 00:05:46.571 "bdev_lvol_start_shallow_copy", 00:05:46.571 "bdev_lvol_grow_lvstore", 00:05:46.571 "bdev_lvol_get_lvols", 00:05:46.571 "bdev_lvol_get_lvstores", 00:05:46.571 "bdev_lvol_delete", 00:05:46.571 "bdev_lvol_set_read_only", 00:05:46.571 "bdev_lvol_resize", 00:05:46.571 "bdev_lvol_decouple_parent", 00:05:46.571 "bdev_lvol_inflate", 00:05:46.571 "bdev_lvol_rename", 00:05:46.571 "bdev_lvol_clone_bdev", 00:05:46.571 "bdev_lvol_clone", 00:05:46.571 "bdev_lvol_snapshot", 00:05:46.571 "bdev_lvol_create", 00:05:46.571 "bdev_lvol_delete_lvstore", 00:05:46.571 "bdev_lvol_rename_lvstore", 00:05:46.571 "bdev_lvol_create_lvstore", 00:05:46.571 "bdev_raid_set_options", 00:05:46.571 "bdev_raid_remove_base_bdev", 00:05:46.571 
"bdev_raid_add_base_bdev", 00:05:46.571 "bdev_raid_delete", 00:05:46.571 "bdev_raid_create", 00:05:46.571 "bdev_raid_get_bdevs", 00:05:46.571 "bdev_error_inject_error", 00:05:46.571 "bdev_error_delete", 00:05:46.571 "bdev_error_create", 00:05:46.571 "bdev_split_delete", 00:05:46.571 "bdev_split_create", 00:05:46.571 "bdev_delay_delete", 00:05:46.571 "bdev_delay_create", 00:05:46.571 "bdev_delay_update_latency", 00:05:46.571 "bdev_zone_block_delete", 00:05:46.571 "bdev_zone_block_create", 00:05:46.571 "blobfs_create", 00:05:46.571 "blobfs_detect", 00:05:46.571 "blobfs_set_cache_size", 00:05:46.571 "bdev_aio_delete", 00:05:46.571 "bdev_aio_rescan", 00:05:46.571 "bdev_aio_create", 00:05:46.571 "bdev_ftl_set_property", 00:05:46.571 "bdev_ftl_get_properties", 00:05:46.571 "bdev_ftl_get_stats", 00:05:46.571 "bdev_ftl_unmap", 00:05:46.571 "bdev_ftl_unload", 00:05:46.571 "bdev_ftl_delete", 00:05:46.571 "bdev_ftl_load", 00:05:46.571 "bdev_ftl_create", 00:05:46.571 "bdev_virtio_attach_controller", 00:05:46.571 "bdev_virtio_scsi_get_devices", 00:05:46.571 "bdev_virtio_detach_controller", 00:05:46.571 "bdev_virtio_blk_set_hotplug", 00:05:46.571 "bdev_iscsi_delete", 00:05:46.571 "bdev_iscsi_create", 00:05:46.571 "bdev_iscsi_set_options", 00:05:46.571 "accel_error_inject_error", 00:05:46.571 "ioat_scan_accel_module", 00:05:46.571 "dsa_scan_accel_module", 00:05:46.571 "iaa_scan_accel_module", 00:05:46.571 "vfu_virtio_create_scsi_endpoint", 00:05:46.571 "vfu_virtio_scsi_remove_target", 00:05:46.571 "vfu_virtio_scsi_add_target", 00:05:46.571 "vfu_virtio_create_blk_endpoint", 00:05:46.571 "vfu_virtio_delete_endpoint", 00:05:46.572 "keyring_file_remove_key", 00:05:46.572 "keyring_file_add_key", 00:05:46.572 "keyring_linux_set_options", 00:05:46.572 "iscsi_get_histogram", 00:05:46.572 "iscsi_enable_histogram", 00:05:46.572 "iscsi_set_options", 00:05:46.572 "iscsi_get_auth_groups", 00:05:46.572 "iscsi_auth_group_remove_secret", 00:05:46.572 "iscsi_auth_group_add_secret", 00:05:46.572 
"iscsi_delete_auth_group", 00:05:46.572 "iscsi_create_auth_group", 00:05:46.572 "iscsi_set_discovery_auth", 00:05:46.572 "iscsi_get_options", 00:05:46.572 "iscsi_target_node_request_logout", 00:05:46.572 "iscsi_target_node_set_redirect", 00:05:46.572 "iscsi_target_node_set_auth", 00:05:46.572 "iscsi_target_node_add_lun", 00:05:46.572 "iscsi_get_stats", 00:05:46.572 "iscsi_get_connections", 00:05:46.572 "iscsi_portal_group_set_auth", 00:05:46.572 "iscsi_start_portal_group", 00:05:46.572 "iscsi_delete_portal_group", 00:05:46.572 "iscsi_create_portal_group", 00:05:46.572 "iscsi_get_portal_groups", 00:05:46.572 "iscsi_delete_target_node", 00:05:46.572 "iscsi_target_node_remove_pg_ig_maps", 00:05:46.572 "iscsi_target_node_add_pg_ig_maps", 00:05:46.572 "iscsi_create_target_node", 00:05:46.572 "iscsi_get_target_nodes", 00:05:46.572 "iscsi_delete_initiator_group", 00:05:46.572 "iscsi_initiator_group_remove_initiators", 00:05:46.572 "iscsi_initiator_group_add_initiators", 00:05:46.572 "iscsi_create_initiator_group", 00:05:46.572 "iscsi_get_initiator_groups", 00:05:46.572 "nvmf_set_crdt", 00:05:46.572 "nvmf_set_config", 00:05:46.572 "nvmf_set_max_subsystems", 00:05:46.572 "nvmf_stop_mdns_prr", 00:05:46.572 "nvmf_publish_mdns_prr", 00:05:46.572 "nvmf_subsystem_get_listeners", 00:05:46.572 "nvmf_subsystem_get_qpairs", 00:05:46.572 "nvmf_subsystem_get_controllers", 00:05:46.572 "nvmf_get_stats", 00:05:46.572 "nvmf_get_transports", 00:05:46.572 "nvmf_create_transport", 00:05:46.572 "nvmf_get_targets", 00:05:46.572 "nvmf_delete_target", 00:05:46.572 "nvmf_create_target", 00:05:46.572 "nvmf_subsystem_allow_any_host", 00:05:46.572 "nvmf_subsystem_remove_host", 00:05:46.572 "nvmf_subsystem_add_host", 00:05:46.572 "nvmf_ns_remove_host", 00:05:46.572 "nvmf_ns_add_host", 00:05:46.572 "nvmf_subsystem_remove_ns", 00:05:46.572 "nvmf_subsystem_add_ns", 00:05:46.572 "nvmf_subsystem_listener_set_ana_state", 00:05:46.572 "nvmf_discovery_get_referrals", 00:05:46.572 
"nvmf_discovery_remove_referral", 00:05:46.572 "nvmf_discovery_add_referral", 00:05:46.572 "nvmf_subsystem_remove_listener", 00:05:46.572 "nvmf_subsystem_add_listener", 00:05:46.572 "nvmf_delete_subsystem", 00:05:46.572 "nvmf_create_subsystem", 00:05:46.572 "nvmf_get_subsystems", 00:05:46.572 "env_dpdk_get_mem_stats", 00:05:46.572 "nbd_get_disks", 00:05:46.572 "nbd_stop_disk", 00:05:46.572 "nbd_start_disk", 00:05:46.572 "ublk_recover_disk", 00:05:46.572 "ublk_get_disks", 00:05:46.572 "ublk_stop_disk", 00:05:46.572 "ublk_start_disk", 00:05:46.572 "ublk_destroy_target", 00:05:46.572 "ublk_create_target", 00:05:46.572 "virtio_blk_create_transport", 00:05:46.572 "virtio_blk_get_transports", 00:05:46.572 "vhost_controller_set_coalescing", 00:05:46.572 "vhost_get_controllers", 00:05:46.572 "vhost_delete_controller", 00:05:46.572 "vhost_create_blk_controller", 00:05:46.572 "vhost_scsi_controller_remove_target", 00:05:46.572 "vhost_scsi_controller_add_target", 00:05:46.572 "vhost_start_scsi_controller", 00:05:46.572 "vhost_create_scsi_controller", 00:05:46.572 "thread_set_cpumask", 00:05:46.572 "framework_get_scheduler", 00:05:46.572 "framework_set_scheduler", 00:05:46.572 "framework_get_reactors", 00:05:46.572 "thread_get_io_channels", 00:05:46.572 "thread_get_pollers", 00:05:46.572 "thread_get_stats", 00:05:46.572 "framework_monitor_context_switch", 00:05:46.572 "spdk_kill_instance", 00:05:46.572 "log_enable_timestamps", 00:05:46.572 "log_get_flags", 00:05:46.572 "log_clear_flag", 00:05:46.572 "log_set_flag", 00:05:46.572 "log_get_level", 00:05:46.572 "log_set_level", 00:05:46.572 "log_get_print_level", 00:05:46.572 "log_set_print_level", 00:05:46.572 "framework_enable_cpumask_locks", 00:05:46.572 "framework_disable_cpumask_locks", 00:05:46.572 "framework_wait_init", 00:05:46.572 "framework_start_init", 00:05:46.572 "scsi_get_devices", 00:05:46.572 "bdev_get_histogram", 00:05:46.572 "bdev_enable_histogram", 00:05:46.572 "bdev_set_qos_limit", 00:05:46.572 
"bdev_set_qd_sampling_period", 00:05:46.572 "bdev_get_bdevs", 00:05:46.572 "bdev_reset_iostat", 00:05:46.572 "bdev_get_iostat", 00:05:46.572 "bdev_examine", 00:05:46.572 "bdev_wait_for_examine", 00:05:46.572 "bdev_set_options", 00:05:46.572 "notify_get_notifications", 00:05:46.572 "notify_get_types", 00:05:46.572 "accel_get_stats", 00:05:46.572 "accel_set_options", 00:05:46.572 "accel_set_driver", 00:05:46.572 "accel_crypto_key_destroy", 00:05:46.572 "accel_crypto_keys_get", 00:05:46.572 "accel_crypto_key_create", 00:05:46.572 "accel_assign_opc", 00:05:46.572 "accel_get_module_info", 00:05:46.572 "accel_get_opc_assignments", 00:05:46.572 "vmd_rescan", 00:05:46.572 "vmd_remove_device", 00:05:46.572 "vmd_enable", 00:05:46.572 "sock_get_default_impl", 00:05:46.572 "sock_set_default_impl", 00:05:46.572 "sock_impl_set_options", 00:05:46.572 "sock_impl_get_options", 00:05:46.572 "iobuf_get_stats", 00:05:46.572 "iobuf_set_options", 00:05:46.572 "keyring_get_keys", 00:05:46.572 "framework_get_pci_devices", 00:05:46.572 "framework_get_config", 00:05:46.572 "framework_get_subsystems", 00:05:46.572 "vfu_tgt_set_base_path", 00:05:46.572 "trace_get_info", 00:05:46.572 "trace_get_tpoint_group_mask", 00:05:46.572 "trace_disable_tpoint_group", 00:05:46.572 "trace_enable_tpoint_group", 00:05:46.572 "trace_clear_tpoint_mask", 00:05:46.572 "trace_set_tpoint_mask", 00:05:46.572 "spdk_get_version", 00:05:46.572 "rpc_get_methods" 00:05:46.572 ] 00:05:46.572 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:46.572 10:26:34 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.572 10:26:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.572 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:46.572 10:26:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3722984 00:05:46.572 10:26:34 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3722984 ']' 00:05:46.572 10:26:34 spdkcli_tcp -- 
common/autotest_common.sh@950 -- # kill -0 3722984 00:05:46.572 10:26:35 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:46.572 10:26:35 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:46.572 10:26:35 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3722984 00:05:46.572 10:26:35 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:46.572 10:26:35 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:46.572 10:26:35 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3722984' 00:05:46.572 killing process with pid 3722984 00:05:46.572 10:26:35 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3722984 00:05:46.572 10:26:35 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3722984 00:05:46.832 00:05:46.832 real 0m1.082s 00:05:46.832 user 0m2.010s 00:05:46.832 sys 0m0.420s 00:05:46.832 10:26:35 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.832 10:26:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.832 ************************************ 00:05:46.832 END TEST spdkcli_tcp 00:05:46.832 ************************************ 00:05:46.832 10:26:35 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.832 10:26:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.832 10:26:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.832 10:26:35 -- common/autotest_common.sh@10 -- # set +x 00:05:47.091 ************************************ 00:05:47.091 START TEST dpdk_mem_utility 00:05:47.091 ************************************ 00:05:47.091 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.091 * Looking for test storage... 
00:05:47.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:47.091 10:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:47.091 10:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3723158 00:05:47.091 10:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3723158 00:05:47.091 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3723158 ']' 00:05:47.091 10:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.091 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.091 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.091 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.091 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.091 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.091 [2024-07-23 10:26:35.452164] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:47.091 [2024-07-23 10:26:35.452266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723158 ] 00:05:47.091 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.091 [2024-07-23 10:26:35.512807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.350 [2024-07-23 10:26:35.600552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.350 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:47.350 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:47.350 10:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.350 10:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.350 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.350 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.350 { 00:05:47.350 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.350 } 00:05:47.350 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.350 10:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:47.609 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:47.610 1 heaps totaling size 814.000000 MiB 00:05:47.610 size: 814.000000 MiB heap id: 0 00:05:47.610 end heaps---------- 00:05:47.610 8 mempools totaling size 598.116089 MiB 00:05:47.610 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.610 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.610 size: 84.521057 MiB name: bdev_io_3723158 00:05:47.610 size: 51.011292 MiB name: evtpool_3723158 00:05:47.610 size: 50.003479 
MiB name: msgpool_3723158 00:05:47.610 size: 21.763794 MiB name: PDU_Pool 00:05:47.610 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.610 size: 0.026123 MiB name: Session_Pool 00:05:47.610 end mempools------- 00:05:47.610 6 memzones totaling size 4.142822 MiB 00:05:47.610 size: 1.000366 MiB name: RG_ring_0_3723158 00:05:47.610 size: 1.000366 MiB name: RG_ring_1_3723158 00:05:47.610 size: 1.000366 MiB name: RG_ring_4_3723158 00:05:47.610 size: 1.000366 MiB name: RG_ring_5_3723158 00:05:47.610 size: 0.125366 MiB name: RG_ring_2_3723158 00:05:47.610 size: 0.015991 MiB name: RG_ring_3_3723158 00:05:47.610 end memzones------- 00:05:47.610 10:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.610 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:47.610 list of free elements. size: 12.519348 MiB 00:05:47.610 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:47.610 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:47.610 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:47.610 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:47.610 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:47.610 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:47.610 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:47.610 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:47.610 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:47.610 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:47.610 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:47.610 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:47.610 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:47.610 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:47.610 element at 
address: 0x200003a00000 with size: 0.355530 MiB 00:05:47.610 list of standard malloc elements. size: 199.218079 MiB 00:05:47.610 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:47.610 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:47.610 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:47.610 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:47.610 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:47.610 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:47.610 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:47.610 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:47.610 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:47.610 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:47.610 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:47.610 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:47.610 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:47.610 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:47.610 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:47.610 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:47.610 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:47.610 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:47.610 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:47.610 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:47.610 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:47.610 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:47.610 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:47.610 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:47.610 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:47.610 element at address: 0x200003eff0c0 with size: 0.000183 MiB 
00:05:47.610 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:47.610 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:47.610 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:47.610 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:47.610 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:47.610 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:47.610 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:47.610 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:47.610 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:47.610 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:47.610 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:47.610 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:47.610 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:47.610 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:47.610 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:47.610 list of memzone associated elements. 
size: 602.262573 MiB 00:05:47.610 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:47.610 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.610 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:47.610 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.610 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:47.610 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3723158_0 00:05:47.610 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:47.610 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3723158_0 00:05:47.610 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:47.610 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3723158_0 00:05:47.610 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:47.610 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.610 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:47.610 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.610 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:47.610 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3723158 00:05:47.610 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:47.610 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3723158 00:05:47.610 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:47.610 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3723158 00:05:47.610 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:47.610 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.610 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:47.610 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.610 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:47.610 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.610 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:47.610 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.610 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:47.610 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3723158 00:05:47.610 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:47.610 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3723158 00:05:47.610 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:47.610 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3723158 00:05:47.610 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:47.610 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3723158 00:05:47.610 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:47.610 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3723158 00:05:47.610 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:47.610 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.610 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:47.610 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.610 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:47.610 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.610 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:47.610 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3723158 00:05:47.610 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:47.610 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.610 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:47.610 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.610 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:05:47.610 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3723158 00:05:47.610 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:47.610 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.610 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:47.610 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3723158 00:05:47.610 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:47.610 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3723158 00:05:47.610 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:47.610 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.610 10:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.610 10:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3723158 00:05:47.610 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3723158 ']' 00:05:47.610 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3723158 00:05:47.610 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:47.610 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:47.611 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3723158 00:05:47.611 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:47.611 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:47.611 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3723158' 00:05:47.611 killing process with pid 3723158 00:05:47.611 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3723158 00:05:47.611 10:26:35 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3723158 00:05:47.869 00:05:47.869 real 0m0.904s 
00:05:47.869 user 0m0.939s 00:05:47.869 sys 0m0.380s 00:05:47.869 10:26:36 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.869 10:26:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.869 ************************************ 00:05:47.869 END TEST dpdk_mem_utility 00:05:47.869 ************************************ 00:05:47.869 10:26:36 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:47.869 10:26:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.869 10:26:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.869 10:26:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.869 ************************************ 00:05:47.869 START TEST event 00:05:47.869 ************************************ 00:05:47.869 10:26:36 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:47.869 * Looking for test storage... 
00:05:47.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:47.870 10:26:36 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:47.870 10:26:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:47.870 10:26:36 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:47.870 10:26:36 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:47.870 10:26:36 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.870 10:26:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.870 ************************************ 00:05:47.870 START TEST event_perf 00:05:47.870 ************************************ 00:05:47.870 10:26:36 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.128 Running I/O for 1 seconds...[2024-07-23 10:26:36.381772] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:48.128 [2024-07-23 10:26:36.381840] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723315 ] 00:05:48.128 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.128 [2024-07-23 10:26:36.451049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.128 [2024-07-23 10:26:36.545262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.128 [2024-07-23 10:26:36.545340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.128 [2024-07-23 10:26:36.545288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.128 [2024-07-23 10:26:36.545343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.503 Running I/O for 1 seconds... 00:05:49.503 lcore 0: 228882 00:05:49.503 lcore 1: 228881 00:05:49.503 lcore 2: 228881 00:05:49.503 lcore 3: 228881 00:05:49.503 done. 
00:05:49.503 00:05:49.503 real 0m1.241s 00:05:49.503 user 0m4.150s 00:05:49.503 sys 0m0.081s 00:05:49.503 10:26:37 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.503 10:26:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.503 ************************************ 00:05:49.503 END TEST event_perf 00:05:49.503 ************************************ 00:05:49.503 10:26:37 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:49.503 10:26:37 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:49.503 10:26:37 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.503 10:26:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.503 ************************************ 00:05:49.503 START TEST event_reactor 00:05:49.503 ************************************ 00:05:49.503 10:26:37 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:49.503 [2024-07-23 10:26:37.680313] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:49.503 [2024-07-23 10:26:37.680378] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723440 ] 00:05:49.503 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.503 [2024-07-23 10:26:37.739488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.503 [2024-07-23 10:26:37.830391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.437 test_start 00:05:50.437 oneshot 00:05:50.437 tick 100 00:05:50.437 tick 100 00:05:50.437 tick 250 00:05:50.437 tick 100 00:05:50.437 tick 100 00:05:50.437 tick 100 00:05:50.437 tick 250 00:05:50.437 tick 500 00:05:50.437 tick 100 00:05:50.437 tick 100 00:05:50.437 tick 250 00:05:50.437 tick 100 00:05:50.437 tick 100 00:05:50.437 test_end 00:05:50.437 00:05:50.437 real 0m1.228s 00:05:50.437 user 0m1.138s 00:05:50.437 sys 0m0.082s 00:05:50.437 10:26:38 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.437 10:26:38 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:50.437 ************************************ 00:05:50.437 END TEST event_reactor 00:05:50.437 ************************************ 00:05:50.437 10:26:38 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.437 10:26:38 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:50.437 10:26:38 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.437 10:26:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.696 ************************************ 00:05:50.696 START TEST event_reactor_perf 00:05:50.696 ************************************ 00:05:50.696 10:26:38 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.696 [2024-07-23 10:26:38.964792] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:50.696 [2024-07-23 10:26:38.964862] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723562 ] 00:05:50.696 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.696 [2024-07-23 10:26:39.023179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.696 [2024-07-23 10:26:39.113765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.070 test_start 00:05:52.070 test_end 00:05:52.070 Performance: 322949 events per second 00:05:52.070 00:05:52.070 real 0m1.226s 00:05:52.070 user 0m1.140s 00:05:52.070 sys 0m0.078s 00:05:52.070 10:26:40 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.070 10:26:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.070 ************************************ 00:05:52.070 END TEST event_reactor_perf 00:05:52.070 ************************************ 00:05:52.070 10:26:40 event -- event/event.sh@49 -- # uname -s 00:05:52.070 10:26:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:52.070 10:26:40 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:52.070 10:26:40 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.070 10:26:40 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.070 10:26:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.070 ************************************ 00:05:52.070 START TEST event_scheduler 00:05:52.070 ************************************ 00:05:52.070 10:26:40 
event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:52.070 * Looking for test storage... 00:05:52.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:52.070 10:26:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:52.070 10:26:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3723791 00:05:52.070 10:26:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:52.070 10:26:40 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.070 10:26:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3723791 00:05:52.070 10:26:40 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3723791 ']' 00:05:52.070 10:26:40 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.070 10:26:40 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:52.070 10:26:40 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.070 10:26:40 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:52.071 10:26:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.071 [2024-07-23 10:26:40.333938] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:52.071 [2024-07-23 10:26:40.334038] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723791 ] 00:05:52.071 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.071 [2024-07-23 10:26:40.393399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.071 [2024-07-23 10:26:40.484258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.071 [2024-07-23 10:26:40.484311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.071 [2024-07-23 10:26:40.484361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.071 [2024-07-23 10:26:40.484364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.329 10:26:40 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:52.329 10:26:40 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:52.329 10:26:40 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:52.329 10:26:40 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.329 10:26:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.329 POWER: Env isn't set yet! 00:05:52.329 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:52.329 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:05:52.329 POWER: Cannot get available frequencies of lcore 0 00:05:52.329 POWER: Attempting to initialise PSTAT power management... 
00:05:52.329 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:52.329 POWER: Initialized successfully for lcore 0 power management 00:05:52.329 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:52.329 POWER: Initialized successfully for lcore 1 power management 00:05:52.329 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:52.329 POWER: Initialized successfully for lcore 2 power management 00:05:52.329 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:52.329 POWER: Initialized successfully for lcore 3 power management 00:05:52.329 [2024-07-23 10:26:40.628659] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:52.329 [2024-07-23 10:26:40.628678] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:52.329 [2024-07-23 10:26:40.628689] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:52.329 10:26:40 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.329 10:26:40 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:52.329 10:26:40 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.329 10:26:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.329 [2024-07-23 10:26:40.712507] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:52.329 10:26:40 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.329 10:26:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:52.330 10:26:40 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.330 10:26:40 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.330 10:26:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 ************************************ 00:05:52.330 START TEST scheduler_create_thread 00:05:52.330 ************************************ 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 2 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 3 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 4 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 5 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 6 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:52.330 7 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 8 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 9 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 10 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.330 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.589 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.589 10:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:52.589 10:26:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:52.589 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.589 10:26:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.847 10:26:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.847 10:26:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:52.847 10:26:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.847 10:26:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.760 10:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.760 10:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:54.760 10:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:54.760 10:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.760 10:26:42 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.694 10:26:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.694 00:05:55.694 real 0m3.098s 00:05:55.694 user 0m0.011s 00:05:55.694 sys 0m0.006s 00:05:55.694 10:26:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.694 10:26:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.694 ************************************ 00:05:55.694 END TEST scheduler_create_thread 00:05:55.694 ************************************ 00:05:55.694 10:26:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:55.694 10:26:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3723791 00:05:55.694 10:26:43 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3723791 ']' 00:05:55.694 10:26:43 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3723791 00:05:55.694 10:26:43 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:05:55.694 10:26:43 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.694 10:26:43 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3723791 00:05:55.695 10:26:43 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:55.695 10:26:43 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:55.695 10:26:43 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3723791' 00:05:55.695 killing process with pid 3723791 00:05:55.695 10:26:43 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3723791 00:05:55.695 10:26:43 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3723791 00:05:55.953 [2024-07-23 
10:26:44.216683] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:55.953 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:05:55.953 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:55.953 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:05:55.953 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:55.953 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:05:55.953 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:55.953 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:05:55.953 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:55.953 00:05:55.953 real 0m4.173s 00:05:55.953 user 0m6.884s 00:05:55.953 sys 0m0.317s 00:05:55.953 10:26:44 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.953 10:26:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.953 ************************************ 00:05:55.953 END TEST event_scheduler 00:05:55.953 ************************************ 00:05:55.953 10:26:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:55.953 10:26:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:55.953 10:26:44 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.953 10:26:44 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.953 10:26:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.212 ************************************ 00:05:56.212 START TEST app_repeat 00:05:56.212 ************************************ 00:05:56.212 10:26:44 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 
00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3724164 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3724164' 00:05:56.212 Process app_repeat pid: 3724164 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:56.212 spdk_app_start Round 0 00:05:56.212 10:26:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3724164 /var/tmp/spdk-nbd.sock 00:05:56.212 10:26:44 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3724164 ']' 00:05:56.212 10:26:44 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.212 10:26:44 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:56.212 10:26:44 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:56.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.212 10:26:44 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:56.212 10:26:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.212 [2024-07-23 10:26:44.485720] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:56.212 [2024-07-23 10:26:44.485800] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724164 ] 00:05:56.212 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.213 [2024-07-23 10:26:44.548700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.213 [2024-07-23 10:26:44.637243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.213 [2024-07-23 10:26:44.637248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.472 10:26:44 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.472 10:26:44 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:56.472 10:26:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.730 Malloc0 00:05:56.730 10:26:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.988 Malloc1 00:05:56.988 10:26:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 
00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.988 10:26:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.246 /dev/nbd0 00:05:57.246 10:26:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.246 10:26:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.246 10:26:45 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:57.246 10:26:45 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:57.246 10:26:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:57.246 10:26:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:57.246 10:26:45 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 
/proc/partitions 00:05:57.246 10:26:45 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:57.246 10:26:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:57.246 10:26:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:57.247 10:26:45 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.247 1+0 records in 00:05:57.247 1+0 records out 00:05:57.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165691 s, 24.7 MB/s 00:05:57.247 10:26:45 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.247 10:26:45 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:57.247 10:26:45 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.247 10:26:45 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:57.247 10:26:45 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:57.247 10:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.247 10:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.247 10:26:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.504 /dev/nbd1 00:05:57.762 10:26:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.762 10:26:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 
1 )) 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.762 1+0 records in 00:05:57.762 1+0 records out 00:05:57.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214037 s, 19.1 MB/s 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:57.762 10:26:46 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:57.762 10:26:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.762 10:26:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.762 10:26:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.762 10:26:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.762 10:26:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:58.021 { 00:05:58.021 "nbd_device": "/dev/nbd0", 00:05:58.021 "bdev_name": "Malloc0" 00:05:58.021 }, 00:05:58.021 { 00:05:58.021 "nbd_device": "/dev/nbd1", 00:05:58.021 "bdev_name": "Malloc1" 00:05:58.021 } 00:05:58.021 ]' 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.021 { 00:05:58.021 "nbd_device": "/dev/nbd0", 00:05:58.021 "bdev_name": "Malloc0" 00:05:58.021 }, 00:05:58.021 { 00:05:58.021 "nbd_device": "/dev/nbd1", 00:05:58.021 "bdev_name": "Malloc1" 00:05:58.021 } 00:05:58.021 ]' 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.021 /dev/nbd1' 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.021 /dev/nbd1' 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.021 256+0 records in 00:05:58.021 256+0 records out 00:05:58.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00586899 s, 179 MB/s 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.021 256+0 records in 00:05:58.021 256+0 records out 00:05:58.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253627 s, 41.3 MB/s 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.021 256+0 records in 00:05:58.021 256+0 records out 00:05:58.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286303 s, 36.6 MB/s 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.021 10:26:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.279 10:26:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.279 10:26:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.279 10:26:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.279 10:26:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.279 10:26:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.279 10:26:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.279 10:26:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.279 10:26:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.279 10:26:46 
event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.279 10:26:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.844 10:26:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.844 10:26:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.844 10:26:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.844 10:26:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.844 10:26:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.844 10:26:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.844 10:26:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.844 10:26:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.844 10:26:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.844 10:26:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.844 10:26:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.103 10:26:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.103 10:26:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.103 10:26:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.103 10:26:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.103 10:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.103 10:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.103 10:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.103 10:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 
00:05:59.103 10:26:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.103 10:26:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.103 10:26:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.103 10:26:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.103 10:26:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.361 10:26:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.659 [2024-07-23 10:26:47.902544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.660 [2024-07-23 10:26:47.992048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.660 [2024-07-23 10:26:47.992075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.660 [2024-07-23 10:26:48.043112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.660 [2024-07-23 10:26:48.043219] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.960 10:26:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.961 10:26:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:02.961 spdk_app_start Round 1 00:06:02.961 10:26:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3724164 /var/tmp/spdk-nbd.sock 00:06:02.961 10:26:50 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3724164 ']' 00:06:02.961 10:26:50 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.961 10:26:50 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.961 10:26:50 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:02.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.961 10:26:50 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.961 10:26:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.961 10:26:51 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.961 10:26:51 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:02.961 10:26:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.961 Malloc0 00:06:02.961 10:26:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.218 Malloc1 00:06:03.219 10:26:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.219 10:26:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.477 /dev/nbd0 00:06:03.735 10:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.735 10:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.735 10:26:51 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:03.735 10:26:51 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:03.735 10:26:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:03.735 10:26:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:03.735 10:26:51 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:03.735 10:26:51 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:03.735 10:26:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:03.735 10:26:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:03.735 10:26:51 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.735 1+0 records in 00:06:03.735 1+0 records out 00:06:03.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190445 s, 21.5 MB/s 00:06:03.735 10:26:52 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.735 10:26:52 
event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:03.735 10:26:52 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.735 10:26:52 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:03.735 10:26:52 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:03.735 10:26:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.735 10:26:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.735 10:26:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.993 /dev/nbd1 00:06:03.993 10:26:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.993 10:26:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.993 1+0 records in 00:06:03.993 1+0 records out 00:06:03.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201911 s, 
20.3 MB/s 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:03.993 10:26:52 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:03.993 10:26:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.993 10:26:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.993 10:26:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.993 10:26:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.993 10:26:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.251 10:26:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.251 { 00:06:04.251 "nbd_device": "/dev/nbd0", 00:06:04.251 "bdev_name": "Malloc0" 00:06:04.251 }, 00:06:04.251 { 00:06:04.251 "nbd_device": "/dev/nbd1", 00:06:04.251 "bdev_name": "Malloc1" 00:06:04.251 } 00:06:04.251 ]' 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.252 { 00:06:04.252 "nbd_device": "/dev/nbd0", 00:06:04.252 "bdev_name": "Malloc0" 00:06:04.252 }, 00:06:04.252 { 00:06:04.252 "nbd_device": "/dev/nbd1", 00:06:04.252 "bdev_name": "Malloc1" 00:06:04.252 } 00:06:04.252 ]' 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.252 /dev/nbd1' 00:06:04.252 10:26:52 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.252 /dev/nbd1' 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.252 256+0 records in 00:06:04.252 256+0 records out 00:06:04.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516303 s, 203 MB/s 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.252 256+0 records in 00:06:04.252 256+0 records out 00:06:04.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244959 s, 42.8 MB/s 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.252 10:26:52 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.252 256+0 records in 00:06:04.252 256+0 records out 00:06:04.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264961 s, 39.6 MB/s 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.252 10:26:52 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.252 10:26:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.825 10:26:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.825 10:26:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.825 10:26:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.825 10:26:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.825 10:26:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.825 10:26:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.825 10:26:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.825 10:26:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.825 10:26:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.825 10:26:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.083 10:26:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.083 10:26:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.083 10:26:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.083 10:26:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.083 10:26:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.083 10:26:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:06:05.083 10:26:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.083 10:26:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.083 10:26:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.083 10:26:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.083 10:26:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.340 10:26:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.340 10:26:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.340 10:26:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.340 10:26:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.340 10:26:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.340 10:26:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.340 10:26:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.340 10:26:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.340 10:26:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.340 10:26:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.340 10:26:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.340 10:26:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.340 10:26:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.599 10:26:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.599 [2024-07-23 10:26:54.057865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.857 [2024-07-23 10:26:54.147835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
00:06:05.857 [2024-07-23 10:26:54.147838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.857 [2024-07-23 10:26:54.199235] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.857 [2024-07-23 10:26:54.199315] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.136 10:26:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.136 10:26:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:09.136 spdk_app_start Round 2 00:06:09.136 10:26:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3724164 /var/tmp/spdk-nbd.sock 00:06:09.136 10:26:56 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3724164 ']' 00:06:09.136 10:26:56 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.136 10:26:56 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.136 10:26:56 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:09.136 10:26:56 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:09.136 10:26:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:09.136 10:26:57 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:09.136 10:26:57 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:06:09.136 10:26:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:09.136 Malloc0
00:06:09.136 10:26:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:09.394 Malloc1
00:06:09.394 10:26:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:09.394 10:26:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:09.652 /dev/nbd0
00:06:09.652 10:26:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:09.652 10:26:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:09.652 10:26:58 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0
00:06:09.652 10:26:58 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:06:09.652 10:26:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:06:09.652 10:26:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:06:09.652 10:26:58 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions
00:06:09.652 10:26:58 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:06:09.652 10:26:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:06:09.652 10:26:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:06:09.652 10:26:58 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:09.652 1+0 records in
00:06:09.652 1+0 records out
00:06:09.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169689 s, 24.1 MB/s
00:06:09.652 10:26:58 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:09.909 10:26:58 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:06:09.909 10:26:58 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:09.909 10:26:58 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:06:09.909 10:26:58 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:06:09.909 10:26:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:09.909 10:26:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:09.909 10:26:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:10.167 /dev/nbd1
00:06:10.167 10:26:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:10.167 10:26:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:10.167 10:26:58 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1
00:06:10.167 10:26:58 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:06:10.167 10:26:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:06:10.167 10:26:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:06:10.167 10:26:58 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions
00:06:10.167 10:26:58 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:06:10.167 10:26:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:06:10.167 10:26:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:06:10.167 10:26:58 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:10.167 1+0 records in
00:06:10.167 1+0 records out
00:06:10.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216402 s, 18.9 MB/s
00:06:10.167 10:26:58 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:10.167 10:26:58 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:06:10.168 10:26:58 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:10.168 10:26:58 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:06:10.168 10:26:58 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:06:10.168 10:26:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:10.168 10:26:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:10.168 10:26:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:10.168 10:26:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.168 10:26:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:10.425 {
00:06:10.425 "nbd_device": "/dev/nbd0",
00:06:10.425 "bdev_name": "Malloc0"
00:06:10.425 },
00:06:10.425 {
00:06:10.425 "nbd_device": "/dev/nbd1",
00:06:10.425 "bdev_name": "Malloc1"
00:06:10.425 }
00:06:10.425 ]'
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:10.425 {
00:06:10.425 "nbd_device": "/dev/nbd0",
00:06:10.425 "bdev_name": "Malloc0"
00:06:10.425 },
00:06:10.425 {
00:06:10.425 "nbd_device": "/dev/nbd1",
00:06:10.425 "bdev_name": "Malloc1"
00:06:10.425 }
00:06:10.425 ]'
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:10.425 /dev/nbd1'
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:10.425 /dev/nbd1'
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:10.425 256+0 records in
00:06:10.425 256+0 records out
00:06:10.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593717 s, 177 MB/s
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:10.425 256+0 records in
00:06:10.425 256+0 records out
00:06:10.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253036 s, 41.4 MB/s
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:10.425 256+0 records in
00:06:10.425 256+0 records out
00:06:10.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264589 s, 39.6 MB/s
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.425 10:26:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:10.426 10:26:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:10.992 10:26:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:10.992 10:26:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:10.993 10:26:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:10.993 10:26:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:10.993 10:26:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:10.993 10:26:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:10.993 10:26:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:10.993 10:26:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:10.993 10:26:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:10.993 10:26:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:11.251 10:26:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:11.251 10:26:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:11.251 10:26:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:11.251 10:26:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:11.251 10:26:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:11.251 10:26:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:11.251 10:26:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:11.251 10:26:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:11.251 10:26:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:11.251 10:26:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:11.251 10:26:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:11.509 10:26:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:11.509 10:26:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:11.509 10:26:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:11.509 10:26:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:11.509 10:26:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:11.509 10:26:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:11.509 10:26:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:11.509 10:26:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:11.509 10:26:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:11.509 10:26:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:11.509 10:26:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:11.509 10:26:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:11.509 10:26:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:11.766 10:27:00 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:12.024 [2024-07-23 10:27:00.337760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:12.024 [2024-07-23 10:27:00.428177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.024 [2024-07-23 10:27:00.428180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.024 [2024-07-23 10:27:00.477889] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:12.024 [2024-07-23 10:27:00.477967] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:15.302 10:27:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3724164 /var/tmp/spdk-nbd.sock
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3724164 ']'
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
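The nbd_rpc_data_verify sequence traced above boils down to a write-then-verify pattern: fill a temporary file with 1 MiB of random data, dd it onto each exported /dev/nbdX, then cmp each device back against the source file. A minimal stand-alone sketch of that pattern, with throwaway regular files standing in for the nbd devices (all paths here are illustrative, not the test's real ones; the real helper also uses oflag=direct, which needs a true block device):

```shell
#!/usr/bin/env bash
# Sketch of the write/verify pattern from nbd_dd_data_verify, using plain
# files instead of real /dev/nbdX devices (the "devices" are stand-ins).
set -euo pipefail

tmp_file=$(mktemp)   # plays the role of the nbdrandtest pattern file
dev0=$(mktemp)       # stands in for /dev/nbd0
dev1=$(mktemp)       # stands in for /dev/nbd1

# write phase: generate the random pattern, then copy it onto each "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

# verify phase: byte-compare the first 1 MiB of each "device" with the pattern
for dev in "$dev0" "$dev1"; do
    cmp -n 1048576 "$tmp_file" "$dev"
done
echo "verify OK"
rm -f "$tmp_file" "$dev0" "$dev1"
```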
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:06:15.302 10:27:03 event.app_repeat -- event/event.sh@39 -- # killprocess 3724164
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3724164 ']'
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3724164
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@951 -- # uname
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3724164
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3724164'
killing process with pid 3724164
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3724164
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3724164
00:06:15.302 spdk_app_start is called in Round 0.
00:06:15.302 Shutdown signal received, stop current app iteration
00:06:15.302 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization...
00:06:15.302 spdk_app_start is called in Round 1.
00:06:15.302 Shutdown signal received, stop current app iteration
00:06:15.302 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization...
00:06:15.302 spdk_app_start is called in Round 2.
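The killprocess helper traced above probes liveness with kill -0 (which sends no signal), recovers the command name with ps, then kills and reaps the target. A simplified sketch of that pattern (the function body here is a reduction for illustration, not the real autotest_common.sh implementation):

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess pattern: kill -0 probes liveness
# without delivering a signal, ps recovers the command name, then the
# process is terminated and reaped with wait.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                  # still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it if it is our child
}

sleep 60 &                                      # demo child process
killprocess_sketch $!
kill -0 $! 2>/dev/null && echo "still alive" || echo "gone"
```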
00:06:15.302 Shutdown signal received, stop current app iteration
00:06:15.302 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization...
00:06:15.302 spdk_app_start is called in Round 3.
00:06:15.302 Shutdown signal received, stop current app iteration
00:06:15.302 10:27:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:15.302 10:27:03 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:15.302
00:06:15.302 real 0m19.203s
00:06:15.302 user 0m42.782s
00:06:15.302 sys 0m3.502s
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:15.302 10:27:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:15.302 ************************************
00:06:15.302 END TEST app_repeat
00:06:15.302 ************************************
00:06:15.302 10:27:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:15.302 10:27:03 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:15.302 10:27:03 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:15.302 10:27:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:15.302 10:27:03 event -- common/autotest_common.sh@10 -- # set +x
00:06:15.302 ************************************
00:06:15.302 START TEST cpu_locks
00:06:15.302 ************************************
00:06:15.302 10:27:03 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:15.302 * Looking for test storage...
00:06:15.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:15.302 10:27:03 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:15.302 10:27:03 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:15.302 10:27:03 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:15.302 10:27:03 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:15.302 10:27:03 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:15.302 10:27:03 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:15.302 10:27:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:15.302 ************************************
00:06:15.302 START TEST default_locks
00:06:15.302 ************************************
00:06:15.302 10:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks
00:06:15.302 10:27:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3726303
00:06:15.302 10:27:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:15.302 10:27:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3726303
00:06:15.302 10:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3726303 ']'
00:06:15.302 10:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:15.302 10:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:15.302 10:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:15.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:15.302 10:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:15.302 10:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:15.560 [2024-07-23 10:27:03.849625] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:15.560 [2024-07-23 10:27:03.849727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726303 ]
00:06:15.560 EAL: No free 2048 kB hugepages reported on node 1
00:06:15.560 [2024-07-23 10:27:03.908755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.560 [2024-07-23 10:27:03.996284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.819 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:15.819 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0
00:06:15.819 10:27:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3726303
00:06:15.819 10:27:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:15.819 10:27:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3726303
00:06:16.078 lslocks: write error
00:06:16.078 10:27:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3726303
00:06:16.078 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3726303 ']'
00:06:16.078 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3726303
00:06:16.078 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname
00:06:16.078 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:16.078 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3726303
00:06:16.078 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:16.078 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:16.078 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3726303'
killing process with pid 3726303
00:06:16.078 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3726303
00:06:16.078 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3726303
00:06:16.338 10:27:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3726303
00:06:16.338 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:06:16.338 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3726303
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3726303
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3726303 ']'
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:16.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3726303) - No such process
00:06:16.339 ERROR: process (pid: 3726303) is no longer running
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:16.339
00:06:16.339 real 0m1.041s
00:06:16.339 user 0m1.041s
00:06:16.339 sys 0m0.525s
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:16.339 10:27:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:16.598 ************************************
00:06:16.598 END TEST default_locks
00:06:16.598 ************************************
00:06:16.598 10:27:04 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:16.598 10:27:04 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:16.598 10:27:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:16.598 10:27:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:16.598 ************************************
00:06:16.598 START TEST default_locks_via_rpc
00:06:16.598 ************************************
00:06:16.598 10:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc
00:06:16.598 10:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3726433
00:06:16.598 10:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:16.598 10:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3726433
00:06:16.598 10:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3726433 ']'
00:06:16.598 10:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.598 10:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:16.598 10:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
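The locks_exist check traced above greps `lslocks -p <pid>` for spdk_cpu_lock entries: the target is expected to hold one exclusive file lock per claimed core. A sketch of that locking behaviour using the util-linux flock utility on a hypothetical lock-file path (the real lock files' location and naming are not shown in this trace):

```shell
#!/usr/bin/env bash
# Sketch of the per-core lock the default_locks test looks for: the owner
# keeps an exclusive flock on a lock file for as long as it runs, so any
# second claimant probing non-blockingly must fail. Path is hypothetical.
lock_file=$(mktemp /tmp/spdk_cpu_lock_demo.XXXXXX)

exec 9>"$lock_file"   # keep an fd open for the lifetime of the "target"
flock -x 9            # take the exclusive lock, as a core owner would

# a second claimant probes non-blockingly and is refused
if flock -n -x "$lock_file" -c true; then
    echo "core lock free"
else
    echo "core lock held"
fi

exec 9>&-             # closing the fd releases the lock
rm -f "$lock_file"
```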
00:06:16.598 10:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:16.598 10:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:16.598 [2024-07-23 10:27:04.948415] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:16.598 [2024-07-23 10:27:04.948522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726433 ]
00:06:16.598 EAL: No free 2048 kB hugepages reported on node 1
00:06:16.598 [2024-07-23 10:27:05.008101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.598 [2024-07-23 10:27:05.095752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3726433
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3726433
00:06:16.857 10:27:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:17.422 10:27:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3726433
00:06:17.422 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3726433 ']'
00:06:17.422 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3726433
00:06:17.422 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname
00:06:17.422 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:17.422 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3726433
00:06:17.422 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:17.422 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:17.422 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3726433'
killing process with pid 3726433
00:06:17.422 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3726433
00:06:17.422 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3726433
00:06:17.681
00:06:17.681 real 0m1.100s
00:06:17.681 user 0m1.128s
00:06:17.681 sys 0m0.524s
00:06:17.681 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:17.681 10:27:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:17.681 ************************************
00:06:17.681 END TEST default_locks_via_rpc
00:06:17.681 ************************************
00:06:17.681 10:27:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:17.681 10:27:06 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:17.681 10:27:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:17.681 10:27:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:17.682 ************************************
00:06:17.682 START TEST non_locking_app_on_locked_coremask
00:06:17.682 ************************************
00:06:17.682 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask
00:06:17.682 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3726563
00:06:17.682 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:17.682 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3726563 /var/tmp/spdk.sock
00:06:17.682 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3726563 ']'
00:06:17.682 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:17.682 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:17.682 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:17.682 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:17.682 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:17.682 [2024-07-23 10:27:06.104694] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:17.682 [2024-07-23 10:27:06.104791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726563 ]
00:06:17.682 EAL: No free 2048 kB hugepages reported on node 1
00:06:17.682 [2024-07-23 10:27:06.168015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.941 [2024-07-23 10:27:06.259503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.199 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:18.199 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0
00:06:18.199 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3726566
00:06:18.199 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:18.199 10:27:06
event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3726566 /var/tmp/spdk2.sock 00:06:18.199 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3726566 ']' 00:06:18.199 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.199 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.199 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.199 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.199 10:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.199 [2024-07-23 10:27:06.537436] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:18.199 [2024-07-23 10:27:06.537532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726566 ] 00:06:18.199 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.199 [2024-07-23 10:27:06.630187] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
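The `lslocks -p <pid> | grep -q spdk_cpu_lock` checks interleaved through the trace above (the `locks_exist` helper) verify that a target started *without* `--disable-cpumask-locks` still holds its per-core lock file. A minimal, hypothetical stand-in for that mechanism — using `flock(1)` on a temp file rather than SPDK's real `/var/tmp/spdk_cpu_lock_*` files — can be sketched as:

```shell
#!/usr/bin/env bash
# Hypothetical emulation of SPDK's per-core lock files: one exclusive
# flock() per claimed core; a second claimant on the same core fails.
lockfile=$(mktemp /tmp/cpu_lock_demo.XXXXXX)

exec 9>"$lockfile"            # first "reactor" opens the lock file
flock -n 9 && echo "core claimed"

# A second claim on the same file must be rejected while fd 9 holds it,
# mirroring "Cannot create lock on core N, probably process X has claimed it".
( exec 8>"$lockfile"; flock -n 8 ) \
    && echo "second claim ok" \
    || echo "second claim rejected"

rm -f "$lockfile"
```

This also shows why the trace only runs `locks_exist` against the instance launched without `--disable-cpumask-locks`: a target started with that flag never creates the lock files, so there is nothing for `lslocks` to find.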
00:06:18.199 [2024-07-23 10:27:06.630239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.458 [2024-07-23 10:27:06.814679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.396 10:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.396 10:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:19.396 10:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3726563 00:06:19.396 10:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3726563 00:06:19.396 10:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.655 lslocks: write error 00:06:19.655 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3726563 00:06:19.655 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3726563 ']' 00:06:19.655 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3726563 00:06:19.655 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:19.655 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:19.655 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3726563 00:06:19.655 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:19.655 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:19.655 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 3726563' 00:06:19.655 killing process with pid 3726563 00:06:19.655 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3726563 00:06:19.655 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3726563 00:06:20.223 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3726566 00:06:20.223 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3726566 ']' 00:06:20.223 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3726566 00:06:20.223 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:20.223 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.223 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3726566 00:06:20.223 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.223 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.223 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3726566' 00:06:20.223 killing process with pid 3726566 00:06:20.223 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3726566 00:06:20.223 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3726566 00:06:20.482 00:06:20.482 real 0m2.900s 00:06:20.482 user 0m3.243s 00:06:20.482 sys 0m1.046s 00:06:20.482 10:27:08 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.482 10:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.482 ************************************ 00:06:20.482 END TEST non_locking_app_on_locked_coremask 00:06:20.482 ************************************ 00:06:20.482 10:27:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:20.482 10:27:08 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:20.482 10:27:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.482 10:27:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.095 ************************************ 00:06:21.095 START TEST locking_app_on_unlocked_coremask 00:06:21.095 ************************************ 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3727194 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3727194 /var/tmp/spdk.sock 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3727194 ']' 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.095 10:27:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.095 [2024-07-23 10:27:09.060981] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:21.095 [2024-07-23 10:27:09.061079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727194 ] 00:06:21.095 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.095 [2024-07-23 10:27:09.120848] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
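The repeated `killprocess` calls in the trace follow a fixed pattern: confirm the pid is alive with `kill -0`, resolve its command name with `ps --no-headers -o comm=`, refuse to proceed if it is `sudo`, then `kill` and `wait`. A simplified sketch of that pattern follows; `sleep` stands in for the SPDK target, and `killprocess_demo` mirrors, but is not, the real `autotest_common.sh` function:

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess() pattern seen in the trace.
killprocess_demo() {
    local pid=$1
    kill -0 "$pid" || return 1                # pid must exist
    local name
    name=$(ps --no-headers -o comm= "$pid")   # resolve command name
    [ "$name" = sudo ] && return 1            # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it; ignore its status
}

sleep 60 &                                    # stand-in for spdk_tgt
killprocess_demo $!
```

The `wait` at the end is what lets the harness proceed immediately to the next `START TEST` without leaving a zombie behind.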
00:06:21.095 [2024-07-23 10:27:09.120899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.095 [2024-07-23 10:27:09.208253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3727305 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3727305 /var/tmp/spdk2.sock 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3727305 ']' 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.095 10:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.095 [2024-07-23 10:27:09.476291] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:21.095 [2024-07-23 10:27:09.476388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727305 ] 00:06:21.095 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.095 [2024-07-23 10:27:09.566233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.353 [2024-07-23 10:27:09.742614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.287 10:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.287 10:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:22.287 10:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3727305 00:06:22.287 10:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3727305 00:06:22.287 10:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.855 lslocks: write error 00:06:22.855 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3727194 00:06:22.855 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3727194 ']' 00:06:22.855 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3727194 00:06:22.855 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:22.855 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:22.855 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3727194 00:06:22.855 10:27:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:22.855 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:22.855 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3727194' 00:06:22.855 killing process with pid 3727194 00:06:22.855 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3727194 00:06:22.855 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3727194 00:06:23.425 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3727305 00:06:23.425 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3727305 ']' 00:06:23.425 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3727305 00:06:23.425 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:23.425 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:23.425 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3727305 00:06:23.425 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:23.425 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:23.425 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3727305' 00:06:23.425 killing process with pid 3727305 00:06:23.425 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@965 -- # kill 3727305 00:06:23.425 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3727305 00:06:23.685 00:06:23.685 real 0m2.944s 00:06:23.685 user 0m3.319s 00:06:23.685 sys 0m1.023s 00:06:23.685 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.685 10:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.685 ************************************ 00:06:23.685 END TEST locking_app_on_unlocked_coremask 00:06:23.685 ************************************ 00:06:23.685 10:27:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:23.685 10:27:11 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.685 10:27:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.685 10:27:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.685 ************************************ 00:06:23.685 START TEST locking_app_on_locked_coremask 00:06:23.685 ************************************ 00:06:23.685 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:23.685 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3727667 00:06:23.685 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.685 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3727667 /var/tmp/spdk.sock 00:06:23.685 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3727667 ']' 00:06:23.685 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.685 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.685 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.685 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.685 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.685 [2024-07-23 10:27:12.064658] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:23.685 [2024-07-23 10:27:12.064757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727667 ] 00:06:23.685 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.685 [2024-07-23 10:27:12.124572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.943 [2024-07-23 10:27:12.212445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3727753 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3727753 /var/tmp/spdk2.sock 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3727753 /var/tmp/spdk2.sock 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3727753 /var/tmp/spdk2.sock 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3727753 ']' 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.943 10:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.204 [2024-07-23 10:27:12.481288] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:24.204 [2024-07-23 10:27:12.481379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727753 ] 00:06:24.204 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.204 [2024-07-23 10:27:12.573878] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3727667 has claimed it. 00:06:24.204 [2024-07-23 10:27:12.573948] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:24.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3727753) - No such process 00:06:24.771 ERROR: process (pid: 3727753) is no longer running 00:06:24.771 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.771 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:24.771 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:24.771 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.771 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.771 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.771 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # 
locks_exist 3727667 00:06:24.772 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3727667 00:06:24.772 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.339 lslocks: write error 00:06:25.339 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3727667 00:06:25.339 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3727667 ']' 00:06:25.339 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3727667 00:06:25.339 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:25.339 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.339 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3727667 00:06:25.339 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.339 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.339 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3727667' 00:06:25.339 killing process with pid 3727667 00:06:25.339 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3727667 00:06:25.339 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3727667 00:06:25.600 00:06:25.600 real 0m1.895s 00:06:25.600 user 0m2.186s 00:06:25.600 sys 0m0.608s 00:06:25.600 10:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.600 10:27:13 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.600 ************************************ 00:06:25.600 END TEST locking_app_on_locked_coremask 00:06:25.600 ************************************ 00:06:25.600 10:27:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:25.600 10:27:13 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.600 10:27:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.600 10:27:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.600 ************************************ 00:06:25.600 START TEST locking_overlapped_coremask 00:06:25.600 ************************************ 00:06:25.600 10:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:25.600 10:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3727891 00:06:25.600 10:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:25.601 10:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3727891 /var/tmp/spdk.sock 00:06:25.601 10:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3727891 ']' 00:06:25.601 10:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.601 10:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.601 10:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:25.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.601 10:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.601 10:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.601 [2024-07-23 10:27:14.018239] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:25.601 [2024-07-23 10:27:14.018332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727891 ] 00:06:25.601 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.601 [2024-07-23 10:27:14.079599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.860 [2024-07-23 10:27:14.171816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.860 [2024-07-23 10:27:14.171907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.860 [2024-07-23 10:27:14.171871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3727984 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3727984 /var/tmp/spdk2.sock 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 
-- # local es=0 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3727984 /var/tmp/spdk2.sock 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3727984 /var/tmp/spdk2.sock 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3727984 ']' 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.118 10:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.118 [2024-07-23 10:27:14.453149] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:26.118 [2024-07-23 10:27:14.453248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727984 ] 00:06:26.118 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.118 [2024-07-23 10:27:14.543842] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3727891 has claimed it. 00:06:26.118 [2024-07-23 10:27:14.543911] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:26.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3727984) - No such process 00:06:26.683 ERROR: process (pid: 3727984) is no longer running 00:06:26.683 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3727891 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3727891 ']' 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3727891 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3727891 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3727891' 00:06:26.942 killing process with pid 3727891 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 3727891 00:06:26.942 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3727891 00:06:27.200 00:06:27.200 real 0m1.524s 00:06:27.200 user 0m4.254s 00:06:27.200 sys 0m0.436s 00:06:27.200 10:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.200 10:27:15 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:27.200 ************************************ 00:06:27.200 END TEST locking_overlapped_coremask 00:06:27.200 ************************************ 00:06:27.200 10:27:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:27.200 10:27:15 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.200 10:27:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.200 10:27:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.200 ************************************ 00:06:27.200 START TEST locking_overlapped_coremask_via_rpc 00:06:27.200 ************************************ 00:06:27.200 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:27.200 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3728116 00:06:27.200 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:27.200 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3728116 /var/tmp/spdk.sock 00:06:27.200 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3728116 ']' 00:06:27.200 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.200 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.200 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:27.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.200 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.200 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.200 [2024-07-23 10:27:15.600812] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:27.200 [2024-07-23 10:27:15.600906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728116 ] 00:06:27.200 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.201 [2024-07-23 10:27:15.664135] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:27.201 [2024-07-23 10:27:15.664187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.495 [2024-07-23 10:27:15.758505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.495 [2024-07-23 10:27:15.758595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.495 [2024-07-23 10:27:15.758628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.495 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.495 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:27.495 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3728132 00:06:27.495 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:27.495 10:27:15 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3728132 /var/tmp/spdk2.sock 00:06:27.495 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3728132 ']' 00:06:27.495 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.495 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.495 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.495 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.495 10:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.759 [2024-07-23 10:27:16.043729] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:27.759 [2024-07-23 10:27:16.043830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728132 ] 00:06:27.759 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.759 [2024-07-23 10:27:16.136096] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:27.759 [2024-07-23 10:27:16.136141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.063 [2024-07-23 10:27:16.318504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.063 [2024-07-23 10:27:16.318534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:28.063 [2024-07-23 10:27:16.318536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.653 10:27:17 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.653 [2024-07-23 10:27:17.087588] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3728116 has claimed it. 00:06:28.653 request: 00:06:28.653 { 00:06:28.653 "method": "framework_enable_cpumask_locks", 00:06:28.653 "req_id": 1 00:06:28.653 } 00:06:28.653 Got JSON-RPC error response 00:06:28.653 response: 00:06:28.653 { 00:06:28.653 "code": -32603, 00:06:28.653 "message": "Failed to claim CPU core: 2" 00:06:28.653 } 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3728116 /var/tmp/spdk.sock 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 
-- # '[' -z 3728116 ']' 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.653 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.912 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.912 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:28.912 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3728132 /var/tmp/spdk2.sock 00:06:28.912 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3728132 ']' 00:06:28.912 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.912 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.912 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:28.912 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.912 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.478 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.478 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:29.478 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:29.478 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.478 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.478 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.478 00:06:29.478 real 0m2.160s 00:06:29.478 user 0m1.233s 00:06:29.478 sys 0m0.210s 00:06:29.478 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.478 10:27:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.478 ************************************ 00:06:29.478 END TEST locking_overlapped_coremask_via_rpc 00:06:29.478 ************************************ 00:06:29.478 10:27:17 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:29.478 10:27:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3728116 ]] 00:06:29.478 10:27:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3728116 00:06:29.478 10:27:17 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3728116 ']' 00:06:29.478 10:27:17 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3728116 00:06:29.478 10:27:17 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:29.478 10:27:17 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.478 10:27:17 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3728116 00:06:29.478 10:27:17 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.478 10:27:17 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.478 10:27:17 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3728116' 00:06:29.478 killing process with pid 3728116 00:06:29.478 10:27:17 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3728116 00:06:29.478 10:27:17 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3728116 00:06:29.738 10:27:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3728132 ]] 00:06:29.738 10:27:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3728132 00:06:29.738 10:27:18 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3728132 ']' 00:06:29.738 10:27:18 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3728132 00:06:29.738 10:27:18 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:29.738 10:27:18 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.738 10:27:18 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3728132 00:06:29.738 10:27:18 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:29.738 10:27:18 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:29.738 10:27:18 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
3728132' 00:06:29.738 killing process with pid 3728132 00:06:29.738 10:27:18 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3728132 00:06:29.738 10:27:18 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3728132 00:06:29.998 10:27:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:29.998 10:27:18 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:29.998 10:27:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3728116 ]] 00:06:29.998 10:27:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3728116 00:06:29.998 10:27:18 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3728116 ']' 00:06:29.998 10:27:18 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3728116 00:06:29.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3728116) - No such process 00:06:29.998 10:27:18 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3728116 is not found' 00:06:29.998 Process with pid 3728116 is not found 00:06:29.998 10:27:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3728132 ]] 00:06:29.998 10:27:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3728132 00:06:29.998 10:27:18 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3728132 ']' 00:06:29.998 10:27:18 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3728132 00:06:29.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3728132) - No such process 00:06:29.998 10:27:18 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3728132 is not found' 00:06:29.998 Process with pid 3728132 is not found 00:06:29.998 10:27:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:29.998 00:06:29.998 real 0m14.619s 00:06:29.998 user 0m27.141s 00:06:29.998 sys 0m5.256s 00:06:29.998 10:27:18 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.998 
10:27:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.998 ************************************ 00:06:29.998 END TEST cpu_locks 00:06:29.998 ************************************ 00:06:29.998 00:06:29.998 real 0m42.071s 00:06:29.998 user 1m23.383s 00:06:29.998 sys 0m9.568s 00:06:29.998 10:27:18 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.998 10:27:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.998 ************************************ 00:06:29.998 END TEST event 00:06:29.998 ************************************ 00:06:29.998 10:27:18 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:29.998 10:27:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.998 10:27:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.998 10:27:18 -- common/autotest_common.sh@10 -- # set +x 00:06:29.998 ************************************ 00:06:29.998 START TEST thread 00:06:29.998 ************************************ 00:06:29.998 10:27:18 thread -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:29.998 * Looking for test storage... 
00:06:29.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:29.999 10:27:18 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:29.999 10:27:18 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:29.999 10:27:18 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.999 10:27:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.999 ************************************ 00:06:29.999 START TEST thread_poller_perf 00:06:29.999 ************************************ 00:06:29.999 10:27:18 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.258 [2024-07-23 10:27:18.505891] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:30.258 [2024-07-23 10:27:18.505961] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728434 ] 00:06:30.258 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.258 [2024-07-23 10:27:18.563994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.258 [2024-07-23 10:27:18.651263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.258 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:31.633 ====================================== 00:06:31.633 busy:2718444744 (cyc) 00:06:31.633 total_run_count: 262000 00:06:31.633 tsc_hz: 2700000000 (cyc) 00:06:31.633 ====================================== 00:06:31.633 poller_cost: 10375 (cyc), 3842 (nsec) 00:06:31.633 00:06:31.633 real 0m1.233s 00:06:31.633 user 0m1.147s 00:06:31.633 sys 0m0.080s 00:06:31.633 10:27:19 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.633 10:27:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.633 ************************************ 00:06:31.633 END TEST thread_poller_perf 00:06:31.633 ************************************ 00:06:31.633 10:27:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:31.633 10:27:19 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:31.633 10:27:19 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.633 10:27:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.633 ************************************ 00:06:31.633 START TEST thread_poller_perf 00:06:31.633 ************************************ 00:06:31.633 10:27:19 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:31.633 [2024-07-23 10:27:19.795225] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:31.633 [2024-07-23 10:27:19.795303] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728605 ] 00:06:31.633 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.633 [2024-07-23 10:27:19.857710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.633 [2024-07-23 10:27:19.947665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.633 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:32.566 ====================================== 00:06:32.566 busy:2702988732 (cyc) 00:06:32.566 total_run_count: 3596000 00:06:32.566 tsc_hz: 2700000000 (cyc) 00:06:32.566 ====================================== 00:06:32.566 poller_cost: 751 (cyc), 278 (nsec) 00:06:32.566 00:06:32.566 real 0m1.234s 00:06:32.566 user 0m1.139s 00:06:32.566 sys 0m0.087s 00:06:32.566 10:27:21 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.566 10:27:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.566 ************************************ 00:06:32.566 END TEST thread_poller_perf 00:06:32.566 ************************************ 00:06:32.566 10:27:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:32.566 00:06:32.566 real 0m2.626s 00:06:32.566 user 0m2.352s 00:06:32.566 sys 0m0.269s 00:06:32.566 10:27:21 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.566 10:27:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.566 ************************************ 00:06:32.566 END TEST thread 00:06:32.566 ************************************ 00:06:32.566 10:27:21 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:32.566 10:27:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.566 
10:27:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.566 10:27:21 -- common/autotest_common.sh@10 -- # set +x 00:06:32.824 ************************************ 00:06:32.824 START TEST accel 00:06:32.824 ************************************ 00:06:32.824 10:27:21 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:32.824 * Looking for test storage... 00:06:32.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:32.824 10:27:21 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:32.824 10:27:21 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:32.824 10:27:21 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:32.824 10:27:21 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3728809 00:06:32.824 10:27:21 accel -- accel/accel.sh@63 -- # waitforlisten 3728809 00:06:32.824 10:27:21 accel -- common/autotest_common.sh@827 -- # '[' -z 3728809 ']' 00:06:32.824 10:27:21 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.824 10:27:21 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.824 10:27:21 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:32.824 10:27:21 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:32.824 10:27:21 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:32.824 10:27:21 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.824 10:27:21 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.824 10:27:21 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.824 10:27:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.824 10:27:21 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.824 10:27:21 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.825 10:27:21 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.825 10:27:21 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:32.825 10:27:21 accel -- accel/accel.sh@41 -- # jq -r . 00:06:32.825 [2024-07-23 10:27:21.196867] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:32.825 [2024-07-23 10:27:21.196963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728809 ] 00:06:32.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.825 [2024-07-23 10:27:21.258279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.083 [2024-07-23 10:27:21.350446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.083 10:27:21 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.083 10:27:21 accel -- common/autotest_common.sh@860 -- # return 0 00:06:33.083 10:27:21 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:33.083 10:27:21 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:33.083 10:27:21 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:33.083 10:27:21 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:33.083 10:27:21 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:33.083 10:27:21 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:33.083 10:27:21 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:33.083 10:27:21 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.083 10:27:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.083 10:27:21 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # 
IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.342 10:27:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.342 10:27:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.342 10:27:21 accel -- accel/accel.sh@75 -- # killprocess 3728809 00:06:33.342 10:27:21 accel -- common/autotest_common.sh@946 -- # '[' -z 3728809 ']' 00:06:33.342 10:27:21 accel -- common/autotest_common.sh@950 -- # kill -0 3728809 00:06:33.342 10:27:21 accel -- common/autotest_common.sh@951 -- # uname 00:06:33.342 10:27:21 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.342 10:27:21 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3728809 00:06:33.342 10:27:21 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.342 10:27:21 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.342 10:27:21 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3728809' 00:06:33.342 killing process with pid 3728809 00:06:33.342 10:27:21 accel -- common/autotest_common.sh@965 -- # kill 3728809 00:06:33.342 
10:27:21 accel -- common/autotest_common.sh@970 -- # wait 3728809 00:06:33.601 10:27:21 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:33.601 10:27:21 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:33.601 10:27:21 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:33.601 10:27:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.601 10:27:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.601 10:27:21 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:33.601 10:27:21 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:33.601 10:27:21 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:33.601 10:27:21 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.601 10:27:21 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.601 10:27:21 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.601 10:27:21 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.601 10:27:21 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.601 10:27:21 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:33.601 10:27:21 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:33.601 10:27:21 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.601 10:27:21 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:33.601 10:27:21 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:33.601 10:27:21 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:33.601 10:27:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.601 10:27:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.601 ************************************ 00:06:33.601 START TEST accel_missing_filename 00:06:33.601 ************************************ 00:06:33.601 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:33.601 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:33.601 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:33.601 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:33.601 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.601 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:33.601 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.601 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:33.601 10:27:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:33.601 10:27:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:33.601 10:27:22 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.601 10:27:22 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.601 10:27:22 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.601 10:27:22 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.601 10:27:22 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.601 10:27:22 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:33.601 10:27:22 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:33.601 [2024-07-23 10:27:22.032711] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:33.601 [2024-07-23 10:27:22.032790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728933 ] 00:06:33.601 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.601 [2024-07-23 10:27:22.095167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.859 [2024-07-23 10:27:22.186603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.859 [2024-07-23 10:27:22.235593] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.859 [2024-07-23 10:27:22.285270] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:33.859 A filename is required. 
00:06:33.859 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:33.859 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.859 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:33.859 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:33.859 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:33.859 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.859 00:06:33.859 real 0m0.334s 00:06:33.859 user 0m0.242s 00:06:33.859 sys 0m0.128s 00:06:33.859 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.859 10:27:22 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:33.859 ************************************ 00:06:33.859 END TEST accel_missing_filename 00:06:33.859 ************************************ 00:06:34.118 10:27:22 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.118 10:27:22 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:34.118 10:27:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.118 10:27:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.118 ************************************ 00:06:34.118 START TEST accel_compress_verify 00:06:34.118 ************************************ 00:06:34.118 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.118 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:34.118 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # 
valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.118 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:34.118 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.118 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:34.118 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.118 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.118 10:27:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.118 10:27:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:34.118 10:27:22 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.118 10:27:22 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.118 10:27:22 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.118 10:27:22 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.118 10:27:22 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.118 10:27:22 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:34.118 10:27:22 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:34.118 [2024-07-23 10:27:22.411604] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:34.118 [2024-07-23 10:27:22.411681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728971 ] 00:06:34.118 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.118 [2024-07-23 10:27:22.470034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.118 [2024-07-23 10:27:22.561504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.118 [2024-07-23 10:27:22.613309] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.376 [2024-07-23 10:27:22.663121] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:34.376 00:06:34.376 Compression does not support the verify option, aborting. 00:06:34.376 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:34.376 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.376 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:34.376 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:34.376 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:34.376 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.376 00:06:34.376 real 0m0.334s 00:06:34.376 user 0m0.240s 00:06:34.376 sys 0m0.128s 00:06:34.376 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.376 10:27:22 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:34.376 ************************************ 00:06:34.376 END TEST accel_compress_verify 00:06:34.376 ************************************ 00:06:34.376 10:27:22 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:34.376 
10:27:22 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:34.376 10:27:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.376 10:27:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.376 ************************************ 00:06:34.376 START TEST accel_wrong_workload 00:06:34.376 ************************************ 00:06:34.376 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:34.376 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:34.376 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:34.376 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:34.376 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.376 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:34.376 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.376 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:34.376 10:27:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:34.376 10:27:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:34.376 10:27:22 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.376 10:27:22 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.376 10:27:22 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.376 10:27:22 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.376 10:27:22 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:06:34.376 10:27:22 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:34.376 10:27:22 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:34.376 Unsupported workload type: foobar 00:06:34.376 [2024-07-23 10:27:22.801257] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:34.376 accel_perf options: 00:06:34.376 [-h help message] 00:06:34.376 [-q queue depth per core] 00:06:34.377 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:34.377 [-T number of threads per core 00:06:34.377 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:34.377 [-t time in seconds] 00:06:34.377 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:34.377 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:34.377 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:34.377 [-l for compress/decompress workloads, name of uncompressed input file 00:06:34.377 [-S for crc32c workload, use this seed value (default 0) 00:06:34.377 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:34.377 [-f for fill workload, use this BYTE value (default 255) 00:06:34.377 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:34.377 [-y verify result if this switch is on] 00:06:34.377 [-a tasks to allocate per core (default: same value as -q)] 00:06:34.377 Can be used to spread operations across a wider range of memory. 
00:06:34.377 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:34.377 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.377 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.377 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.377 00:06:34.377 real 0m0.023s 00:06:34.377 user 0m0.014s 00:06:34.377 sys 0m0.010s 00:06:34.377 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.377 10:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:34.377 ************************************ 00:06:34.377 END TEST accel_wrong_workload 00:06:34.377 ************************************ 00:06:34.377 Error: writing output failed: Broken pipe 00:06:34.377 10:27:22 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:34.377 10:27:22 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:34.377 10:27:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.377 10:27:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.377 ************************************ 00:06:34.377 START TEST accel_negative_buffers 00:06:34.377 ************************************ 00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.377 10:27:22 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:34.377 10:27:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:34.377 10:27:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:34.377 10:27:22 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.377 10:27:22 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.377 10:27:22 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.377 10:27:22 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.377 10:27:22 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.377 10:27:22 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:34.377 10:27:22 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:34.377 -x option must be non-negative. 00:06:34.377 [2024-07-23 10:27:22.872516] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:34.377 accel_perf options: 00:06:34.377 [-h help message] 00:06:34.377 [-q queue depth per core] 00:06:34.377 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:34.377 [-T number of threads per core 00:06:34.377 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:34.377 [-t time in seconds] 00:06:34.377 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:34.377 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:34.377 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:34.377 [-l for compress/decompress workloads, name of uncompressed input file 00:06:34.377 [-S for crc32c workload, use this seed value (default 0) 00:06:34.377 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:34.377 [-f for fill workload, use this BYTE value (default 255) 00:06:34.377 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:34.377 [-y verify result if this switch is on] 00:06:34.377 [-a tasks to allocate per core (default: same value as -q)] 00:06:34.377 Can be used to spread operations across a wider range of memory. 
00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.377 00:06:34.377 real 0m0.024s 00:06:34.377 user 0m0.016s 00:06:34.377 sys 0m0.008s 00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.377 10:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:34.377 ************************************ 00:06:34.377 END TEST accel_negative_buffers 00:06:34.377 ************************************ 00:06:34.636 Error: writing output failed: Broken pipe 00:06:34.636 10:27:22 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:34.636 10:27:22 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:34.636 10:27:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.636 10:27:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.636 ************************************ 00:06:34.636 START TEST accel_crc32c 00:06:34.636 ************************************ 00:06:34.636 10:27:22 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:34.636 10:27:22 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:34.636 [2024-07-23 10:27:22.940449] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:34.636 [2024-07-23 10:27:22.940524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729037 ] 00:06:34.636 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.636 [2024-07-23 10:27:23.000836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.636 [2024-07-23 10:27:23.091627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.894 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.895 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.895 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.895 10:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.895 10:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.895 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.895 10:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.829 10:27:24 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:35.829 10:27:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.829 00:06:35.829 real 0m1.337s 00:06:35.829 user 0m1.211s 00:06:35.829 sys 0m0.127s 00:06:35.829 10:27:24 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.829 10:27:24 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:35.829 ************************************ 00:06:35.829 END TEST accel_crc32c 00:06:35.829 ************************************ 00:06:35.829 10:27:24 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:35.829 10:27:24 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:35.829 10:27:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.829 10:27:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.829 ************************************ 00:06:35.829 START TEST accel_crc32c_C2 00:06:35.829 ************************************ 00:06:35.829 
10:27:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.829 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:35.829 [2024-07-23 10:27:24.331299] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:35.829 [2024-07-23 10:27:24.331370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729241 ] 00:06:36.088 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.088 [2024-07-23 10:27:24.390306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.088 [2024-07-23 10:27:24.481712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" 
in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 
10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.088 10:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.466 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.467 10:27:25 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.467 00:06:37.467 real 0m1.340s 00:06:37.467 user 0m1.214s 00:06:37.467 sys 0m0.126s 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.467 10:27:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:37.467 ************************************ 00:06:37.467 END TEST accel_crc32c_C2 00:06:37.467 ************************************ 00:06:37.467 10:27:25 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:37.467 10:27:25 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:37.467 10:27:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.467 10:27:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.467 ************************************ 00:06:37.467 START TEST accel_copy 00:06:37.467 ************************************ 00:06:37.467 10:27:25 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:37.467 [2024-07-23 10:27:25.719631] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:37.467 [2024-07-23 10:27:25.719702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729366 ] 00:06:37.467 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.467 [2024-07-23 10:27:25.778053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.467 [2024-07-23 10:27:25.869692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # 
read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # 
val=software 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # 
case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.467 10:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.844 10:27:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.844 10:27:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.844 10:27:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.844 10:27:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.844 10:27:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.844 10:27:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.844 10:27:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.844 10:27:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.844 10:27:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.844 10:27:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.844 10:27:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.844 10:27:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 
10:27:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:38.845 10:27:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.845 00:06:38.845 real 0m1.338s 00:06:38.845 user 0m1.210s 00:06:38.845 sys 0m0.128s 00:06:38.845 10:27:27 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.845 10:27:27 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:38.845 ************************************ 00:06:38.845 END TEST accel_copy 00:06:38.845 ************************************ 00:06:38.845 10:27:27 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:38.845 10:27:27 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:38.845 10:27:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.845 10:27:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.845 ************************************ 00:06:38.845 START TEST accel_fill 00:06:38.845 ************************************ 00:06:38.845 10:27:27 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 
00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:38.845 [2024-07-23 10:27:27.115880] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:38.845 [2024-07-23 10:27:27.115950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729497 ] 00:06:38.845 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.845 [2024-07-23 10:27:27.176298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.845 [2024-07-23 10:27:27.267638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 10:27:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:40.221 10:27:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.221 00:06:40.221 real 0m1.343s 00:06:40.221 user 0m1.219s 00:06:40.221 sys 0m0.125s 00:06:40.221 10:27:28 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.221 10:27:28 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:40.221 ************************************ 00:06:40.221 END TEST accel_fill 00:06:40.221 ************************************ 00:06:40.221 10:27:28 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:40.221 10:27:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:40.221 10:27:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.221 10:27:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.221 ************************************ 00:06:40.221 START TEST accel_copy_crc32c 00:06:40.221 ************************************ 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 
00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:40.221 [2024-07-23 10:27:28.514793] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:40.221 [2024-07-23 10:27:28.514866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729616 ] 00:06:40.221 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.221 [2024-07-23 10:27:28.573388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.221 [2024-07-23 10:27:28.664320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.221 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 
10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 10:27:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.597 
10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.597 00:06:41.597 real 0m1.339s 00:06:41.597 user 0m1.212s 00:06:41.597 sys 0m0.129s 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.597 10:27:29 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:41.597 ************************************ 00:06:41.597 END TEST accel_copy_crc32c 00:06:41.597 ************************************ 00:06:41.597 10:27:29 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:41.597 10:27:29 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:41.597 10:27:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.597 10:27:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.597 ************************************ 00:06:41.597 START TEST accel_copy_crc32c_C2 00:06:41.597 ************************************ 00:06:41.597 10:27:29 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:41.597 10:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:41.597 [2024-07-23 10:27:29.908470] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:41.597 [2024-07-23 10:27:29.908553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729820 ] 00:06:41.597 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.597 [2024-07-23 10:27:29.967830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.597 [2024-07-23 10:27:30.060062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val= 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.856 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@20 -- # val=Yes 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.857 10:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r 
var val 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.793 00:06:42.793 real 0m1.343s 00:06:42.793 user 0m1.213s 00:06:42.793 sys 0m0.130s 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.793 10:27:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:42.793 ************************************ 00:06:42.793 END TEST accel_copy_crc32c_C2 00:06:42.793 ************************************ 00:06:42.793 10:27:31 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:42.793 10:27:31 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:42.793 10:27:31 accel -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.793 10:27:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.793 ************************************ 00:06:42.793 START TEST accel_dualcast 00:06:42.793 ************************************ 00:06:42.793 10:27:31 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:42.793 10:27:31 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:42.793 10:27:31 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:42.793 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.793 10:27:31 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:42.793 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.793 10:27:31 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:42.793 10:27:31 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:42.794 10:27:31 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.794 10:27:31 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.794 10:27:31 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.794 10:27:31 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.794 10:27:31 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.794 10:27:31 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:42.794 10:27:31 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:43.053 [2024-07-23 10:27:31.298764] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:43.053 [2024-07-23 10:27:31.298833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729949 ] 00:06:43.053 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.053 [2024-07-23 10:27:31.358354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.053 [2024-07-23 10:27:31.449561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 
10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- 
accel/accel.sh@20 -- # val=32 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.053 10:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.430 10:27:32 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.430 10:27:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:44.430 10:27:32 
accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.430 00:06:44.430 real 0m1.341s 00:06:44.430 user 0m1.217s 00:06:44.430 sys 0m0.124s 00:06:44.430 10:27:32 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.430 10:27:32 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:44.430 ************************************ 00:06:44.430 END TEST accel_dualcast 00:06:44.430 ************************************ 00:06:44.430 10:27:32 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:44.430 10:27:32 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:44.430 10:27:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.430 10:27:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.430 ************************************ 00:06:44.430 START TEST accel_compare 00:06:44.430 ************************************ 00:06:44.430 10:27:32 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.430 10:27:32 accel.accel_compare -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:44.430 10:27:32 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:44.431 [2024-07-23 10:27:32.690955] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:44.431 [2024-07-23 10:27:32.691024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730071 ] 00:06:44.431 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.431 [2024-07-23 10:27:32.750638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.431 [2024-07-23 10:27:32.842057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 
10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.431 10:27:32 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.431 10:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:45.810 10:27:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.810 00:06:45.810 real 0m1.340s 00:06:45.810 user 0m1.212s 00:06:45.810 sys 0m0.130s 00:06:45.810 10:27:34 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.810 10:27:34 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:45.810 ************************************ 00:06:45.810 END TEST accel_compare 00:06:45.810 ************************************ 00:06:45.810 10:27:34 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:45.810 10:27:34 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:45.810 10:27:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.810 10:27:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.810 ************************************ 00:06:45.810 START TEST accel_xor 00:06:45.810 ************************************ 00:06:45.810 10:27:34 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w xor -y 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:45.810 [2024-07-23 10:27:34.085809] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:45.810 [2024-07-23 10:27:34.085881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730201 ] 00:06:45.810 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.810 [2024-07-23 10:27:34.145231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.810 [2024-07-23 10:27:34.235919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:45.810 10:27:34 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:45.810 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 
-- # IFS=: 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.811 10:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # 
IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.186 00:06:47.186 real 0m1.343s 00:06:47.186 user 0m1.214s 00:06:47.186 sys 0m0.131s 00:06:47.186 10:27:35 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.186 10:27:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:47.186 ************************************ 00:06:47.186 END TEST accel_xor 00:06:47.186 ************************************ 00:06:47.186 10:27:35 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:47.186 10:27:35 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:47.186 10:27:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.186 10:27:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.186 ************************************ 00:06:47.186 START TEST accel_xor 00:06:47.186 ************************************ 00:06:47.186 10:27:35 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@15 -- 
# accel_perf -t 1 -w xor -y -x 3 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:47.186 [2024-07-23 10:27:35.482448] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:47.186 [2024-07-23 10:27:35.482529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730403 ] 00:06:47.186 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.186 [2024-07-23 10:27:35.540993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.186 [2024-07-23 10:27:35.632118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.186 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.187 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.187 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.187 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:47.187 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.187 10:27:35 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:47.187 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.187 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.187 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:47.445 10:27:35 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.445 10:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:48.379 10:27:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.379 00:06:48.379 real 0m1.339s 00:06:48.379 user 0m1.209s 00:06:48.379 sys 0m0.130s 00:06:48.379 10:27:36 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.379 10:27:36 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:48.379 ************************************ 00:06:48.379 END TEST accel_xor 00:06:48.379 ************************************ 00:06:48.379 10:27:36 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:48.379 10:27:36 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:48.379 10:27:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.379 10:27:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.379 ************************************ 00:06:48.379 START TEST accel_dif_verify 00:06:48.379 ************************************ 00:06:48.379 10:27:36 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:48.379 10:27:36 accel.accel_dif_verify -- 
accel/accel.sh@17 -- # local accel_module 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:48.379 10:27:36 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:48.379 [2024-07-23 10:27:36.869126] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:48.379 [2024-07-23 10:27:36.869196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730528 ] 00:06:48.638 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.638 [2024-07-23 10:27:36.927827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.638 [2024-07-23 10:27:37.019129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 
-- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.638 10:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.013 10:27:38 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:50.013 10:27:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.013 00:06:50.013 real 0m1.342s 00:06:50.013 user 0m1.217s 00:06:50.013 sys 0m0.127s 00:06:50.013 10:27:38 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.013 10:27:38 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:50.013 ************************************ 00:06:50.013 END TEST accel_dif_verify 00:06:50.013 ************************************ 00:06:50.013 10:27:38 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:50.013 10:27:38 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:50.013 10:27:38 accel -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:06:50.013 10:27:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.013 ************************************ 00:06:50.013 START TEST accel_dif_generate 00:06:50.013 ************************************ 00:06:50.013 10:27:38 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:50.013 [2024-07-23 10:27:38.272927] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:50.013 [2024-07-23 10:27:38.272995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730654 ] 00:06:50.013 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.013 [2024-07-23 10:27:38.332855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.013 [2024-07-23 10:27:38.424054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.013 10:27:38 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.013 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.014 10:27:38 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.014 10:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.389 10:27:39 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:51.389 10:27:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.389 00:06:51.389 real 0m1.342s 00:06:51.389 user 0m1.220s 00:06:51.389 sys 0m0.126s 00:06:51.389 10:27:39 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.389 10:27:39 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:51.389 ************************************ 00:06:51.389 END TEST accel_dif_generate 00:06:51.389 ************************************ 00:06:51.389 10:27:39 accel -- 
accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:51.389 10:27:39 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:51.389 10:27:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.389 10:27:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.389 ************************************ 00:06:51.389 START TEST accel_dif_generate_copy 00:06:51.389 ************************************ 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- 
accel/accel.sh@41 -- # jq -r . 00:06:51.389 [2024-07-23 10:27:39.672697] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:51.389 [2024-07-23 10:27:39.672769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730775 ] 00:06:51.389 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.389 [2024-07-23 10:27:39.733896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.389 [2024-07-23 10:27:39.824168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy 
-- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.389 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=No 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.390 10:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.762 10:27:40 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.762 10:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:52.762 10:27:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.762 00:06:52.762 real 0m1.346s 00:06:52.762 user 0m1.219s 00:06:52.762 sys 0m0.128s 00:06:52.762 10:27:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.762 10:27:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:52.762 ************************************ 00:06:52.762 END TEST accel_dif_generate_copy 00:06:52.762 ************************************ 00:06:52.762 10:27:41 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:52.762 10:27:41 accel -- 
accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.762 10:27:41 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:52.762 10:27:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.762 10:27:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.762 ************************************ 00:06:52.762 START TEST accel_comp 00:06:52.762 ************************************ 00:06:52.762 10:27:41 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.762 10:27:41 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:52.762 10:27:41 
accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:52.762 [2024-07-23 10:27:41.076643] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:52.762 [2024-07-23 10:27:41.076716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730984 ] 00:06:52.762 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.762 [2024-07-23 10:27:41.137677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.762 [2024-07-23 10:27:41.228953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # 
val= 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- 
accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" 
in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.019 10:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.951 10:27:42 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:53.951 10:27:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.951 00:06:53.951 real 0m1.345s 00:06:53.951 user 0m1.225s 00:06:53.951 sys 0m0.122s 00:06:53.951 10:27:42 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.952 10:27:42 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:53.952 ************************************ 00:06:53.952 END TEST accel_comp 00:06:53.952 ************************************ 00:06:53.952 10:27:42 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.952 10:27:42 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:53.952 10:27:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.952 10:27:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.952 ************************************ 00:06:53.952 START TEST accel_decomp 00:06:53.952 ************************************ 00:06:53.952 10:27:42 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:53.952 10:27:42 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:54.210 [2024-07-23 10:27:42.469153] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:54.210 [2024-07-23 10:27:42.469224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731115 ] 00:06:54.210 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.210 [2024-07-23 10:27:42.527977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.210 [2024-07-23 10:27:42.619156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.210 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.210 10:27:42 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:54.211 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.211 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.211 10:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.211 10:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.211 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.211 10:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.605 10:27:43 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.605 10:27:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:55.606 10:27:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.606 00:06:55.606 real 0m1.344s 00:06:55.606 user 0m1.213s 00:06:55.606 sys 0m0.133s 00:06:55.606 10:27:43 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.606 10:27:43 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:55.606 ************************************ 00:06:55.606 END TEST accel_decomp 00:06:55.606 ************************************ 00:06:55.606 10:27:43 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:55.606 10:27:43 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:55.606 10:27:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.606 10:27:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.606 ************************************ 00:06:55.606 START TEST accel_decmop_full 00:06:55.606 ************************************ 00:06:55.606 10:27:43 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 
00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:55.606 10:27:43 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:55.606 [2024-07-23 10:27:43.869299] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:55.606 [2024-07-23 10:27:43.869372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731235 ] 00:06:55.606 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.606 [2024-07-23 10:27:43.928141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.606 [2024-07-23 10:27:44.019736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.606 10:27:44 accel.accel_decmop_full -- 
accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 
10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 
accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.606 10:27:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.993 10:27:45 
accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:56.993 10:27:45 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.993 00:06:56.993 real 0m1.358s 00:06:56.993 user 0m1.225s 00:06:56.993 sys 0m0.135s 00:06:56.993 10:27:45 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.993 10:27:45 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:56.993 ************************************ 00:06:56.993 END TEST accel_decmop_full 00:06:56.994 ************************************ 00:06:56.994 10:27:45 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.994 10:27:45 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:56.994 10:27:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.994 10:27:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.994 ************************************ 
00:06:56.994 START TEST accel_decomp_mcore 00:06:56.994 ************************************ 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:56.994 [2024-07-23 10:27:45.280596] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:56.994 [2024-07-23 10:27:45.280667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731378 ] 00:06:56.994 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.994 [2024-07-23 10:27:45.340017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.994 [2024-07-23 10:27:45.435202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.994 [2024-07-23 10:27:45.435326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.994 [2024-07-23 10:27:45.435329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.994 [2024-07-23 10:27:45.435275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:56.994 10:27:45 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.994 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.252 10:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.187 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.187 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.187 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.187 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.187 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.187 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.187 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.187 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.188 10:27:46 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.188 00:06:58.188 real 0m1.359s 00:06:58.188 user 0m4.536s 00:06:58.188 sys 0m0.149s 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.188 10:27:46 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:58.188 ************************************ 00:06:58.188 END TEST accel_decomp_mcore 00:06:58.188 ************************************ 00:06:58.188 10:27:46 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.188 10:27:46 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:58.188 10:27:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.188 10:27:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.188 ************************************ 00:06:58.188 START TEST accel_decomp_full_mcore 00:06:58.188 ************************************ 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:58.188 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:58.447 [2024-07-23 10:27:46.696475] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:58.447 [2024-07-23 10:27:46.696557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731572 ] 00:06:58.447 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.447 [2024-07-23 10:27:46.756087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.447 [2024-07-23 10:27:46.849845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.447 [2024-07-23 10:27:46.849934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.447 [2024-07-23 10:27:46.849983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.447 [2024-07-23 10:27:46.849987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=0xf 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- 
# case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.447 10:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.824 10:27:48 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.824 00:06:59.824 real 0m1.361s 00:06:59.824 user 0m4.570s 00:06:59.824 sys 0m0.149s 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.824 10:27:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:59.824 ************************************ 00:06:59.824 END TEST accel_decomp_full_mcore 00:06:59.824 ************************************ 00:06:59.824 10:27:48 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.824 10:27:48 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:59.824 10:27:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.824 10:27:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.824 
************************************ 00:06:59.824 START TEST accel_decomp_mthread 00:06:59.824 ************************************ 00:06:59.824 10:27:48 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.824 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:59.824 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:59.824 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:59.825 [2024-07-23 10:27:48.105190] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:59.825 [2024-07-23 10:27:48.105268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731704 ] 00:06:59.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.825 [2024-07-23 10:27:48.166404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.825 [2024-07-23 10:27:48.258240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # 
val= 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- 
accel/accel.sh@22 -- # accel_module=software 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r 
var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.825 10:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.200 10:27:49 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.200 00:07:01.200 real 0m1.352s 00:07:01.200 user 0m1.214s 00:07:01.200 sys 0m0.139s 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.200 10:27:49 accel.accel_decomp_mthread -- 
common/autotest_common.sh@10 -- # set +x 00:07:01.200 ************************************ 00:07:01.200 END TEST accel_decomp_mthread 00:07:01.200 ************************************ 00:07:01.200 10:27:49 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.200 10:27:49 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:01.200 10:27:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.200 10:27:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.200 ************************************ 00:07:01.200 START TEST accel_decomp_full_mthread 00:07:01.200 ************************************ 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:01.200 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:01.200 [2024-07-23 10:27:49.511175] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:01.200 [2024-07-23 10:27:49.511249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731829 ] 00:07:01.200 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.200 [2024-07-23 10:27:49.570044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.201 [2024-07-23 10:27:49.661654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 
bytes' 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 
00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.459 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.460 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.460 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.460 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.460 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.460 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.460 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.460 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.460 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:07:01.460 10:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.396 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.397 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.397 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.397 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.397 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.397 10:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.397 00:07:02.397 real 0m1.376s 00:07:02.397 user 0m1.246s 00:07:02.397 sys 0m0.131s 00:07:02.397 10:27:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.397 10:27:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:02.397 ************************************ 00:07:02.397 END TEST accel_decomp_full_mthread 00:07:02.397 ************************************ 00:07:02.397 10:27:50 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:02.397 10:27:50 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:02.397 10:27:50 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:02.397 10:27:50 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:02.397 10:27:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.397 10:27:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.397 10:27:50 accel -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.397 10:27:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.397 10:27:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.397 10:27:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.397 10:27:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.397 10:27:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:02.397 10:27:50 accel -- accel/accel.sh@41 -- # jq -r . 00:07:02.655 ************************************ 00:07:02.655 START TEST accel_dif_functional_tests 00:07:02.655 ************************************ 00:07:02.655 10:27:50 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:02.655 [2024-07-23 10:27:50.967279] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:02.655 [2024-07-23 10:27:50.967376] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732034 ] 00:07:02.655 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.655 [2024-07-23 10:27:51.027895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.655 [2024-07-23 10:27:51.121463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.655 [2024-07-23 10:27:51.121572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.655 [2024-07-23 10:27:51.121605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.914 00:07:02.914 00:07:02.914 CUnit - A unit testing framework for C - Version 2.1-3 00:07:02.914 http://cunit.sourceforge.net/ 00:07:02.914 00:07:02.914 00:07:02.914 Suite: accel_dif 00:07:02.914 Test: verify: DIF generated, GUARD check ...passed 00:07:02.914 Test: verify: DIF generated, APPTAG check ...passed 00:07:02.914 Test: verify: DIF 
generated, REFTAG check ...passed 00:07:02.914 Test: verify: DIF not generated, GUARD check ...[2024-07-23 10:27:51.204718] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:02.914 passed 00:07:02.914 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 10:27:51.204789] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:02.914 passed 00:07:02.914 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 10:27:51.204832] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:02.914 passed 00:07:02.914 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:02.914 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 10:27:51.204920] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:02.914 passed 00:07:02.914 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:02.914 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:02.914 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:02.914 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 10:27:51.205116] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:02.914 passed 00:07:02.914 Test: verify copy: DIF generated, GUARD check ...passed 00:07:02.914 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:02.914 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:02.914 Test: verify copy: DIF not generated, GUARD check ...[2024-07-23 10:27:51.205309] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:02.914 passed 00:07:02.914 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 10:27:51.205363] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:02.914 passed 00:07:02.914 Test: verify 
copy: DIF not generated, REFTAG check ...[2024-07-23 10:27:51.205410] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:02.914 passed 00:07:02.914 Test: generate copy: DIF generated, GUARD check ...passed 00:07:02.914 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:02.914 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:02.914 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:02.914 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:02.914 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:02.914 Test: generate copy: iovecs-len validate ...[2024-07-23 10:27:51.205706] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:02.914 passed 00:07:02.914 Test: generate copy: buffer alignment validate ...passed 00:07:02.914 00:07:02.914 Run Summary: Type Total Ran Passed Failed Inactive 00:07:02.914 suites 1 1 n/a 0 0 00:07:02.914 tests 26 26 26 0 0 00:07:02.914 asserts 115 115 115 0 n/a 00:07:02.914 00:07:02.914 Elapsed time = 0.005 seconds 00:07:02.914 00:07:02.914 real 0m0.430s 00:07:02.914 user 0m0.607s 00:07:02.914 sys 0m0.172s 00:07:02.914 10:27:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.914 10:27:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:02.914 ************************************ 00:07:02.914 END TEST accel_dif_functional_tests 00:07:02.914 ************************************ 00:07:02.914 00:07:02.914 real 0m30.285s 00:07:02.914 user 0m33.578s 00:07:02.914 sys 0m4.355s 00:07:02.914 10:27:51 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.914 10:27:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.914 ************************************ 00:07:02.914 END TEST accel 00:07:02.914 
************************************ 00:07:02.914 10:27:51 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:02.914 10:27:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:02.914 10:27:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.914 10:27:51 -- common/autotest_common.sh@10 -- # set +x 00:07:03.173 ************************************ 00:07:03.173 START TEST accel_rpc 00:07:03.173 ************************************ 00:07:03.173 10:27:51 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:03.173 * Looking for test storage... 00:07:03.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:03.173 10:27:51 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:03.173 10:27:51 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3732110 00:07:03.173 10:27:51 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3732110 00:07:03.173 10:27:51 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:03.173 10:27:51 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3732110 ']' 00:07:03.173 10:27:51 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.173 10:27:51 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.173 10:27:51 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:03.173 10:27:51 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.173 10:27:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.173 [2024-07-23 10:27:51.529775] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:03.173 [2024-07-23 10:27:51.529875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732110 ] 00:07:03.173 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.173 [2024-07-23 10:27:51.589857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.432 [2024-07-23 10:27:51.677763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.432 10:27:51 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.432 10:27:51 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:03.432 10:27:51 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:03.432 10:27:51 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:03.432 10:27:51 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:03.432 10:27:51 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:03.432 10:27:51 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:03.432 10:27:51 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.432 10:27:51 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.432 10:27:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.432 ************************************ 00:07:03.432 START TEST accel_assign_opcode 00:07:03.432 ************************************ 00:07:03.432 10:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:03.432 10:27:51 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:03.432 10:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.432 10:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.432 [2024-07-23 10:27:51.802597] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:03.432 10:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.432 10:27:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:03.432 10:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.432 10:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.432 [2024-07-23 10:27:51.810580] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:03.432 10:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.432 10:27:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:03.432 10:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.432 10:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.691 10:27:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.691 10:27:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:03.691 10:27:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:03.691 10:27:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:03.691 10:27:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.691 10:27:52 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:07:03.691 10:27:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.691 software 00:07:03.691 00:07:03.691 real 0m0.268s 00:07:03.691 user 0m0.041s 00:07:03.691 sys 0m0.008s 00:07:03.691 10:27:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.691 10:27:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.691 ************************************ 00:07:03.691 END TEST accel_assign_opcode 00:07:03.691 ************************************ 00:07:03.691 10:27:52 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3732110 00:07:03.691 10:27:52 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3732110 ']' 00:07:03.691 10:27:52 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3732110 00:07:03.691 10:27:52 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:03.691 10:27:52 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:03.691 10:27:52 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3732110 00:07:03.691 10:27:52 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:03.691 10:27:52 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:03.691 10:27:52 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3732110' 00:07:03.691 killing process with pid 3732110 00:07:03.691 10:27:52 accel_rpc -- common/autotest_common.sh@965 -- # kill 3732110 00:07:03.691 10:27:52 accel_rpc -- common/autotest_common.sh@970 -- # wait 3732110 00:07:03.951 00:07:03.951 real 0m0.965s 00:07:03.951 user 0m0.964s 00:07:03.951 sys 0m0.392s 00:07:03.951 10:27:52 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.951 10:27:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.951 ************************************ 00:07:03.951 END TEST accel_rpc 00:07:03.951 
************************************ 00:07:03.951 10:27:52 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:03.951 10:27:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.951 10:27:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.951 10:27:52 -- common/autotest_common.sh@10 -- # set +x 00:07:03.951 ************************************ 00:07:03.951 START TEST app_cmdline 00:07:03.951 ************************************ 00:07:03.951 10:27:52 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:04.210 * Looking for test storage... 00:07:04.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:04.210 10:27:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:04.210 10:27:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3732278 00:07:04.210 10:27:52 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:04.210 10:27:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3732278 00:07:04.210 10:27:52 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3732278 ']' 00:07:04.210 10:27:52 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.210 10:27:52 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.210 10:27:52 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.210 10:27:52 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.210 10:27:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.210 [2024-07-23 10:27:52.550044] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:04.210 [2024-07-23 10:27:52.550152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732278 ] 00:07:04.210 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.210 [2024-07-23 10:27:52.612774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.210 [2024-07-23 10:27:52.700765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.469 10:27:52 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.469 10:27:52 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:04.469 10:27:52 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:04.727 { 00:07:04.727 "version": "SPDK v24.05.1-pre git sha1 241d0f3c9", 00:07:04.727 "fields": { 00:07:04.727 "major": 24, 00:07:04.727 "minor": 5, 00:07:04.727 "patch": 1, 00:07:04.727 "suffix": "-pre", 00:07:04.727 "commit": "241d0f3c9" 00:07:04.727 } 00:07:04.727 } 00:07:04.727 10:27:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:04.727 10:27:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:04.727 10:27:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:04.727 10:27:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:04.727 10:27:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:04.727 10:27:53 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.727 
10:27:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:04.727 10:27:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.727 10:27:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:04.727 10:27:53 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.984 10:27:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:04.984 10:27:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:04.984 10:27:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.984 10:27:53 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:04.984 10:27:53 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.984 10:27:53 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.984 10:27:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.984 10:27:53 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.984 10:27:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.984 10:27:53 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.985 10:27:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.985 10:27:53 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.985 10:27:53 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:04.985 10:27:53 app_cmdline -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.243 request: 00:07:05.243 { 00:07:05.243 "method": "env_dpdk_get_mem_stats", 00:07:05.243 "req_id": 1 00:07:05.243 } 00:07:05.243 Got JSON-RPC error response 00:07:05.243 response: 00:07:05.243 { 00:07:05.243 "code": -32601, 00:07:05.243 "message": "Method not found" 00:07:05.243 } 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.243 10:27:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3732278 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3732278 ']' 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3732278 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3732278 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3732278' 00:07:05.243 killing process with pid 3732278 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@965 -- # kill 3732278 00:07:05.243 10:27:53 app_cmdline -- common/autotest_common.sh@970 -- # wait 3732278 00:07:05.502 00:07:05.502 real 0m1.397s 00:07:05.502 user 0m1.878s 00:07:05.502 sys 0m0.431s 00:07:05.502 10:27:53 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.502 
10:27:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.502 ************************************ 00:07:05.502 END TEST app_cmdline 00:07:05.502 ************************************ 00:07:05.502 10:27:53 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:05.502 10:27:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.502 10:27:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.502 10:27:53 -- common/autotest_common.sh@10 -- # set +x 00:07:05.502 ************************************ 00:07:05.502 START TEST version 00:07:05.502 ************************************ 00:07:05.502 10:27:53 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:05.502 * Looking for test storage... 00:07:05.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.502 10:27:53 version -- app/version.sh@17 -- # get_header_version major 00:07:05.502 10:27:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.502 10:27:53 version -- app/version.sh@14 -- # cut -f2 00:07:05.502 10:27:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.502 10:27:53 version -- app/version.sh@17 -- # major=24 00:07:05.502 10:27:53 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.502 10:27:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.502 10:27:53 version -- app/version.sh@14 -- # cut -f2 00:07:05.502 10:27:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.502 10:27:53 version -- app/version.sh@18 -- # minor=5 00:07:05.502 10:27:53 version -- app/version.sh@19 -- # get_header_version patch 00:07:05.502 10:27:53 version -- 
app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.502 10:27:53 version -- app/version.sh@14 -- # cut -f2 00:07:05.502 10:27:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.502 10:27:53 version -- app/version.sh@19 -- # patch=1 00:07:05.502 10:27:53 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.502 10:27:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.502 10:27:53 version -- app/version.sh@14 -- # cut -f2 00:07:05.502 10:27:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.502 10:27:53 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.502 10:27:53 version -- app/version.sh@22 -- # version=24.5 00:07:05.502 10:27:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.502 10:27:53 version -- app/version.sh@25 -- # version=24.5.1 00:07:05.502 10:27:53 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:05.502 10:27:53 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.502 10:27:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.502 10:27:54 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:05.502 10:27:54 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:05.762 00:07:05.762 real 0m0.116s 00:07:05.762 user 0m0.067s 00:07:05.762 sys 0m0.070s 00:07:05.762 10:27:54 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.762 10:27:54 version -- common/autotest_common.sh@10 -- # set +x 
00:07:05.762 ************************************ 00:07:05.762 END TEST version 00:07:05.762 ************************************ 00:07:05.762 10:27:54 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:05.762 10:27:54 -- spdk/autotest.sh@198 -- # uname -s 00:07:05.762 10:27:54 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:05.762 10:27:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:05.762 10:27:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:05.762 10:27:54 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:05.762 10:27:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:05.762 10:27:54 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:05.762 10:27:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.762 10:27:54 -- common/autotest_common.sh@10 -- # set +x 00:07:05.762 10:27:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:05.762 10:27:54 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:05.762 10:27:54 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:05.762 10:27:54 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:05.762 10:27:54 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:05.762 10:27:54 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:05.762 10:27:54 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.762 10:27:54 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:05.762 10:27:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.762 10:27:54 -- common/autotest_common.sh@10 -- # set +x 00:07:05.762 ************************************ 00:07:05.762 START TEST nvmf_tcp 00:07:05.762 ************************************ 00:07:05.762 10:27:54 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.762 * Looking for test storage... 
00:07:05.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.762 
10:27:54 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.762 10:27:54 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.762 10:27:54 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.762 10:27:54 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.762 10:27:54 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.762 10:27:54 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.762 10:27:54 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:05.762 10:27:54 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.762 10:27:54 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.763 10:27:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:05.763 10:27:54 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:05.763 10:27:54 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:05.763 10:27:54 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:05.763 10:27:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.763 10:27:54 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:05.763 10:27:54 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:05.763 10:27:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:05.763 10:27:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.763 10:27:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.763 
************************************ 00:07:05.763 START TEST nvmf_example 00:07:05.763 ************************************ 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:05.763 * Looking for test storage... 00:07:05.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.763 10:27:54 
nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.763 10:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.667 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.667 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.667 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.667 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.667 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.667 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.667 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.667 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.667 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.667 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:07.667 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:07.668 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:07.668 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:07.668 Found net devices under 0000:08:00.0: cvl_0_0 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:07.668 Found net devices under 0000:08:00.1: cvl_0_1 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.668 10:27:55 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.668 10:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:07.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:07:07.668 00:07:07.668 --- 10.0.0.2 ping statistics --- 00:07:07.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.668 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:07:07.668 00:07:07.668 --- 10.0.0.1 ping statistics --- 00:07:07.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.668 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.668 10:27:56 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3733775 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3733775 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3733775 ']' 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.668 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.927 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.927 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:07.927 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:07.927 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:07.927 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.927 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.185 10:27:56 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:08.185 10:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:08.185 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.394 Initializing NVMe Controllers 00:07:20.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:20.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:20.394 Initialization complete. Launching workers. 
00:07:20.394 ======================================================== 00:07:20.394 Latency(us) 00:07:20.394 Device Information : IOPS MiB/s Average min max 00:07:20.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13725.37 53.61 4662.64 1048.22 16322.33 00:07:20.394 ======================================================== 00:07:20.394 Total : 13725.37 53.61 4662.64 1048.22 16322.33 00:07:20.394 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:20.394 rmmod nvme_tcp 00:07:20.394 rmmod nvme_fabrics 00:07:20.394 rmmod nvme_keyring 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3733775 ']' 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3733775 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3733775 ']' 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3733775 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3733775 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3733775' 00:07:20.394 killing process with pid 3733775 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3733775 00:07:20.394 10:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3733775 00:07:20.394 nvmf threads initialize successfully 00:07:20.394 bdev subsystem init successfully 00:07:20.394 created a nvmf target service 00:07:20.394 create targets's poll groups done 00:07:20.394 all subsystems of target started 00:07:20.394 nvmf target is running 00:07:20.394 all subsystems of target stopped 00:07:20.394 destroy targets's poll groups done 00:07:20.394 destroyed the nvmf target service 00:07:20.394 bdev subsystem finish successfully 00:07:20.394 nvmf threads destroy successfully 00:07:20.394 10:28:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:20.394 10:28:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:20.394 10:28:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:20.394 10:28:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:20.394 10:28:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:20.394 10:28:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.394 10:28:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.394 10:28:07 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.654 10:28:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:20.654 10:28:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:20.654 10:28:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.654 10:28:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:20.654 00:07:20.654 real 0m14.922s 00:07:20.654 user 0m40.541s 00:07:20.654 sys 0m3.751s 00:07:20.654 10:28:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.654 10:28:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:20.654 ************************************ 00:07:20.654 END TEST nvmf_example 00:07:20.654 ************************************ 00:07:20.654 10:28:09 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:20.654 10:28:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:20.654 10:28:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.654 10:28:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:20.916 ************************************ 00:07:20.916 START TEST nvmf_filesystem 00:07:20.916 ************************************ 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:20.916 * Looking for test storage... 
00:07:20.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # 
CONFIG_SHARED=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:20.916 10:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:20.917 #define SPDK_CONFIG_H 00:07:20.917 
#define SPDK_CONFIG_APPS 1 00:07:20.917 #define SPDK_CONFIG_ARCH native 00:07:20.917 #undef SPDK_CONFIG_ASAN 00:07:20.917 #undef SPDK_CONFIG_AVAHI 00:07:20.917 #undef SPDK_CONFIG_CET 00:07:20.917 #define SPDK_CONFIG_COVERAGE 1 00:07:20.917 #define SPDK_CONFIG_CROSS_PREFIX 00:07:20.917 #undef SPDK_CONFIG_CRYPTO 00:07:20.917 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:20.917 #undef SPDK_CONFIG_CUSTOMOCF 00:07:20.917 #undef SPDK_CONFIG_DAOS 00:07:20.917 #define SPDK_CONFIG_DAOS_DIR 00:07:20.917 #define SPDK_CONFIG_DEBUG 1 00:07:20.917 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:20.917 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:20.917 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:20.917 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:20.917 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:20.917 #undef SPDK_CONFIG_DPDK_UADK 00:07:20.917 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:20.917 #define SPDK_CONFIG_EXAMPLES 1 00:07:20.917 #undef SPDK_CONFIG_FC 00:07:20.917 #define SPDK_CONFIG_FC_PATH 00:07:20.917 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:20.917 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:20.917 #undef SPDK_CONFIG_FUSE 00:07:20.917 #undef SPDK_CONFIG_FUZZER 00:07:20.917 #define SPDK_CONFIG_FUZZER_LIB 00:07:20.917 #undef SPDK_CONFIG_GOLANG 00:07:20.917 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:20.917 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:20.917 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:20.917 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:20.917 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:20.917 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:20.917 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:20.917 #define SPDK_CONFIG_IDXD 1 00:07:20.917 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:20.917 #undef SPDK_CONFIG_IPSEC_MB 00:07:20.917 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:20.917 
#define SPDK_CONFIG_ISAL 1 00:07:20.917 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:20.917 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:20.917 #define SPDK_CONFIG_LIBDIR 00:07:20.917 #undef SPDK_CONFIG_LTO 00:07:20.917 #define SPDK_CONFIG_MAX_LCORES 00:07:20.917 #define SPDK_CONFIG_NVME_CUSE 1 00:07:20.917 #undef SPDK_CONFIG_OCF 00:07:20.917 #define SPDK_CONFIG_OCF_PATH 00:07:20.917 #define SPDK_CONFIG_OPENSSL_PATH 00:07:20.917 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:20.917 #define SPDK_CONFIG_PGO_DIR 00:07:20.917 #undef SPDK_CONFIG_PGO_USE 00:07:20.917 #define SPDK_CONFIG_PREFIX /usr/local 00:07:20.917 #undef SPDK_CONFIG_RAID5F 00:07:20.917 #undef SPDK_CONFIG_RBD 00:07:20.917 #define SPDK_CONFIG_RDMA 1 00:07:20.917 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:20.917 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:20.917 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:20.917 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:20.917 #define SPDK_CONFIG_SHARED 1 00:07:20.917 #undef SPDK_CONFIG_SMA 00:07:20.917 #define SPDK_CONFIG_TESTS 1 00:07:20.917 #undef SPDK_CONFIG_TSAN 00:07:20.917 #define SPDK_CONFIG_UBLK 1 00:07:20.917 #define SPDK_CONFIG_UBSAN 1 00:07:20.917 #undef SPDK_CONFIG_UNIT_TESTS 00:07:20.917 #undef SPDK_CONFIG_URING 00:07:20.917 #define SPDK_CONFIG_URING_PATH 00:07:20.917 #undef SPDK_CONFIG_URING_ZNS 00:07:20.917 #undef SPDK_CONFIG_USDT 00:07:20.917 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:20.917 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:20.917 #define SPDK_CONFIG_VFIO_USER 1 00:07:20.917 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:20.917 #define SPDK_CONFIG_VHOST 1 00:07:20.917 #define SPDK_CONFIG_VIRTIO 1 00:07:20.917 #undef SPDK_CONFIG_VTUNE 00:07:20.917 #define SPDK_CONFIG_VTUNE_DIR 00:07:20.917 #define SPDK_CONFIG_WERROR 1 00:07:20.917 #define SPDK_CONFIG_WPDK_DIR 00:07:20.917 #undef SPDK_CONFIG_XNVME 00:07:20.917 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:20.917 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:20.918 10:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:20.918 10:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:20.918 10:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export 
SPDK_TEST_VMD 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export 
SPDK_TEST_NVMF_NICS 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:20.918 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:20.919 10:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j32 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:20.919 10:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3735087 ]] 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3735087 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.R08jyT 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 
00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.R08jyT/tests/target /tmp/spdk.R08jyT 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=1957711872 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:20.919 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3326717952 00:07:20.919 10:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=41609175040 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=53546168320 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=11936993280 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=26768371712 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=26773082112 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=10700750848 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=10709233664 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
uses["$mount"]=8482816 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=26772619264 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=26773086208 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=466944 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=5354610688 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5354614784 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:20.920 * Looking for test storage... 
00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=41609175040 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=14151585792 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.920 10:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:20.921 10:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:22.828 10:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:22.828 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:22.828 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:22.829 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:22.829 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:22.829 Found net devices under 0000:08:00.0: cvl_0_0 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:22.829 Found net devices under 0000:08:00.1: cvl_0_1 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:22.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:07:22.829 00:07:22.829 --- 10.0.0.2 ping statistics --- 00:07:22.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.829 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:22.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:07:22.829 00:07:22.829 --- 10.0.0.1 ping statistics --- 00:07:22.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.829 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.829 ************************************ 00:07:22.829 START TEST nvmf_filesystem_no_in_capsule 00:07:22.829 ************************************ 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3736337 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3736337 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3736337 ']' 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:22.829 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.829 [2024-07-23 10:28:11.244041] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:22.829 [2024-07-23 10:28:11.244141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.829 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.829 [2024-07-23 10:28:11.309301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.088 [2024-07-23 10:28:11.400356] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.088 [2024-07-23 10:28:11.400418] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.088 [2024-07-23 10:28:11.400434] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.088 [2024-07-23 10:28:11.400447] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.088 [2024-07-23 10:28:11.400459] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:23.088 [2024-07-23 10:28:11.400553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.088 [2024-07-23 10:28:11.400658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.088 [2024-07-23 10:28:11.400737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.088 [2024-07-23 10:28:11.400740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.088 [2024-07-23 10:28:11.544015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.088 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.346 Malloc1 00:07:23.346 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.346 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:23.346 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.346 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.346 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.346 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.346 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.346 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.346 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.346 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.346 10:28:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.346 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.347 [2024-07-23 10:28:11.706709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:23.347 { 00:07:23.347 "name": "Malloc1", 00:07:23.347 "aliases": [ 00:07:23.347 "c7270209-3c89-41d4-adfa-57283cc42918" 00:07:23.347 ], 00:07:23.347 "product_name": "Malloc disk", 
00:07:23.347 "block_size": 512, 00:07:23.347 "num_blocks": 1048576, 00:07:23.347 "uuid": "c7270209-3c89-41d4-adfa-57283cc42918", 00:07:23.347 "assigned_rate_limits": { 00:07:23.347 "rw_ios_per_sec": 0, 00:07:23.347 "rw_mbytes_per_sec": 0, 00:07:23.347 "r_mbytes_per_sec": 0, 00:07:23.347 "w_mbytes_per_sec": 0 00:07:23.347 }, 00:07:23.347 "claimed": true, 00:07:23.347 "claim_type": "exclusive_write", 00:07:23.347 "zoned": false, 00:07:23.347 "supported_io_types": { 00:07:23.347 "read": true, 00:07:23.347 "write": true, 00:07:23.347 "unmap": true, 00:07:23.347 "write_zeroes": true, 00:07:23.347 "flush": true, 00:07:23.347 "reset": true, 00:07:23.347 "compare": false, 00:07:23.347 "compare_and_write": false, 00:07:23.347 "abort": true, 00:07:23.347 "nvme_admin": false, 00:07:23.347 "nvme_io": false 00:07:23.347 }, 00:07:23.347 "memory_domains": [ 00:07:23.347 { 00:07:23.347 "dma_device_id": "system", 00:07:23.347 "dma_device_type": 1 00:07:23.347 }, 00:07:23.347 { 00:07:23.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.347 "dma_device_type": 2 00:07:23.347 } 00:07:23.347 ], 00:07:23.347 "driver_specific": {} 00:07:23.347 } 00:07:23.347 ]' 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:23.347 10:28:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:23.347 10:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:23.911 10:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:23.911 10:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:23.912 10:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:23.912 10:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:23.912 10:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:25.808 10:28:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:25.808 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:26.373 10:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:27.306 10:28:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.236 ************************************ 00:07:28.236 START TEST filesystem_ext4 00:07:28.236 ************************************ 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
common/autotest_common.sh@928 -- # force=-F 00:07:28.236 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:28.236 mke2fs 1.46.5 (30-Dec-2021) 00:07:28.237 Discarding device blocks: 0/522240 done 00:07:28.237 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:28.237 Filesystem UUID: dfd74cae-22c9-421c-846b-5b8ee79c9019 00:07:28.237 Superblock backups stored on blocks: 00:07:28.237 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:28.237 00:07:28.237 Allocating group tables: 0/64 done 00:07:28.237 Writing inode tables: 0/64 done 00:07:28.494 Creating journal (8192 blocks): done 00:07:28.494 Writing superblocks and filesystem accounting information: 0/64 done 00:07:28.494 00:07:28.494 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:28.494 10:28:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.752 10:28:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3736337 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.752 00:07:28.752 real 0m0.732s 00:07:28.752 user 0m0.021s 00:07:28.752 sys 0m0.051s 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:28.752 ************************************ 00:07:28.752 END TEST filesystem_ext4 00:07:28.752 ************************************ 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.752 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.017 ************************************ 00:07:29.017 START TEST filesystem_btrfs 00:07:29.017 ************************************ 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs 
-- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:29.017 btrfs-progs v6.6.2 00:07:29.017 See https://btrfs.readthedocs.io for more information. 00:07:29.017 00:07:29.017 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:29.017 NOTE: several default settings have changed in version 5.15, please make sure 00:07:29.017 this does not affect your deployments: 00:07:29.017 - DUP for metadata (-m dup) 00:07:29.017 - enabled no-holes (-O no-holes) 00:07:29.017 - enabled free-space-tree (-R free-space-tree) 00:07:29.017 00:07:29.017 Label: (null) 00:07:29.017 UUID: b86ae013-b450-4e53-b12d-3200957210bb 00:07:29.017 Node size: 16384 00:07:29.017 Sector size: 4096 00:07:29.017 Filesystem size: 510.00MiB 00:07:29.017 Block group profiles: 00:07:29.017 Data: single 8.00MiB 00:07:29.017 Metadata: DUP 32.00MiB 00:07:29.017 System: DUP 8.00MiB 00:07:29.017 SSD detected: yes 00:07:29.017 Zoned device: no 00:07:29.017 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:29.017 Runtime features: free-space-tree 00:07:29.017 Checksum: crc32c 00:07:29.017 Number of devices: 1 00:07:29.017 Devices: 00:07:29.017 ID SIZE PATH 00:07:29.017 1 510.00MiB /dev/nvme0n1p1 00:07:29.017 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:29.017 10:28:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.032 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.032 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:30.032 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.032 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:30.032 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:30.032 10:28:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.032 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3736337 00:07:30.032 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.033 00:07:30.033 real 0m1.171s 00:07:30.033 user 0m0.010s 00:07:30.033 sys 0m0.126s 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:30.033 ************************************ 00:07:30.033 END TEST filesystem_btrfs 00:07:30.033 ************************************ 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.033 ************************************ 00:07:30.033 START TEST 
filesystem_xfs 00:07:30.033 ************************************ 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:30.033 10:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:30.291 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:30.291 = sectsz=512 attr=2, projid32bit=1 00:07:30.291 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:30.291 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:30.291 data = bsize=4096 blocks=130560, imaxpct=25 
00:07:30.291 = sunit=0 swidth=0 blks 00:07:30.291 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:30.291 log =internal log bsize=4096 blocks=16384, version=2 00:07:30.291 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:30.291 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:30.854 Discarding blocks...Done. 00:07:30.854 10:28:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:30.854 10:28:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3736337 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.387 00:07:33.387 real 0m2.944s 00:07:33.387 user 0m0.019s 00:07:33.387 sys 0m0.060s 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:33.387 ************************************ 00:07:33.387 END TEST filesystem_xfs 00:07:33.387 ************************************ 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:33.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:33.387 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3736337 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3736337 ']' 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3736337 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3736337 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 3736337' 00:07:33.647 killing process with pid 3736337 00:07:33.647 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3736337 00:07:33.648 10:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3736337 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:33.909 00:07:33.909 real 0m11.034s 00:07:33.909 user 0m42.292s 00:07:33.909 sys 0m1.731s 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.909 ************************************ 00:07:33.909 END TEST nvmf_filesystem_no_in_capsule 00:07:33.909 ************************************ 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.909 ************************************ 00:07:33.909 START TEST nvmf_filesystem_in_capsule 00:07:33.909 ************************************ 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:33.909 10:28:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3737570 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3737570 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3737570 ']' 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:33.909 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.909 [2024-07-23 10:28:22.337236] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:33.909 [2024-07-23 10:28:22.337330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.909 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.909 [2024-07-23 10:28:22.403596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.170 [2024-07-23 10:28:22.495029] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.170 [2024-07-23 10:28:22.495096] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.170 [2024-07-23 10:28:22.495121] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.170 [2024-07-23 10:28:22.495141] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.170 [2024-07-23 10:28:22.495157] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:34.170 [2024-07-23 10:28:22.495243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.170 [2024-07-23 10:28:22.495298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.170 [2024-07-23 10:28:22.495353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.170 [2024-07-23 10:28:22.495360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.170 [2024-07-23 10:28:22.648174] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.170 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.431 Malloc1 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.431 10:28:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.431 [2024-07-23 10:28:22.811171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:34.431 { 00:07:34.431 "name": "Malloc1", 00:07:34.431 "aliases": [ 00:07:34.431 "4a64f52a-cfe4-452e-bbea-d0530694d670" 00:07:34.431 ], 00:07:34.431 "product_name": "Malloc disk", 00:07:34.431 "block_size": 512, 00:07:34.431 "num_blocks": 1048576, 00:07:34.431 "uuid": "4a64f52a-cfe4-452e-bbea-d0530694d670", 00:07:34.431 "assigned_rate_limits": { 
00:07:34.431 "rw_ios_per_sec": 0, 00:07:34.431 "rw_mbytes_per_sec": 0, 00:07:34.431 "r_mbytes_per_sec": 0, 00:07:34.431 "w_mbytes_per_sec": 0 00:07:34.431 }, 00:07:34.431 "claimed": true, 00:07:34.431 "claim_type": "exclusive_write", 00:07:34.431 "zoned": false, 00:07:34.431 "supported_io_types": { 00:07:34.431 "read": true, 00:07:34.431 "write": true, 00:07:34.431 "unmap": true, 00:07:34.431 "write_zeroes": true, 00:07:34.431 "flush": true, 00:07:34.431 "reset": true, 00:07:34.431 "compare": false, 00:07:34.431 "compare_and_write": false, 00:07:34.431 "abort": true, 00:07:34.431 "nvme_admin": false, 00:07:34.431 "nvme_io": false 00:07:34.431 }, 00:07:34.431 "memory_domains": [ 00:07:34.431 { 00:07:34.431 "dma_device_id": "system", 00:07:34.431 "dma_device_type": 1 00:07:34.431 }, 00:07:34.431 { 00:07:34.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.431 "dma_device_type": 2 00:07:34.431 } 00:07:34.431 ], 00:07:34.431 "driver_specific": {} 00:07:34.431 } 00:07:34.431 ]' 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:34.431 10:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme 
connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:35.001 10:28:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:35.001 10:28:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:35.001 10:28:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:35.001 10:28:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:35.001 10:28:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:36.910 10:28:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:36.910 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:37.169 10:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:37.738 10:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.118 10:28:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.118 ************************************ 00:07:39.118 START TEST filesystem_in_capsule_ext4 00:07:39.118 ************************************ 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:39.118 10:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 
00:07:39.118 mke2fs 1.46.5 (30-Dec-2021) 00:07:39.118 Discarding device blocks: 0/522240 done 00:07:39.118 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:39.118 Filesystem UUID: 22630486-4b99-4f45-b424-ddc97b9c312a 00:07:39.118 Superblock backups stored on blocks: 00:07:39.118 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:39.118 00:07:39.118 Allocating group tables: 0/64 done 00:07:39.118 Writing inode tables: 0/64 done 00:07:39.118 Creating journal (8192 blocks): done 00:07:39.948 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:07:39.948 00:07:39.948 10:28:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:39.948 10:28:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:40.517 10:28:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:40.517 10:28:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:40.517 10:28:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:40.517 10:28:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:40.517 10:28:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:40.517 10:28:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:40.517 10:28:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3737570 00:07:40.517 10:28:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:40.517 10:28:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:40.517 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:40.517 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:40.517 00:07:40.517 real 0m1.794s 00:07:40.517 user 0m0.020s 00:07:40.517 sys 0m0.052s 00:07:40.517 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.517 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:40.517 ************************************ 00:07:40.517 END TEST filesystem_in_capsule_ext4 00:07:40.517 ************************************ 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.777 ************************************ 00:07:40.777 START TEST filesystem_in_capsule_btrfs 00:07:40.777 ************************************ 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create 
btrfs nvme0n1 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:40.777 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:41.036 btrfs-progs v6.6.2 00:07:41.036 See https://btrfs.readthedocs.io for more information. 00:07:41.036 00:07:41.036 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:41.036 NOTE: several default settings have changed in version 5.15, please make sure 00:07:41.036 this does not affect your deployments: 00:07:41.036 - DUP for metadata (-m dup) 00:07:41.036 - enabled no-holes (-O no-holes) 00:07:41.036 - enabled free-space-tree (-R free-space-tree) 00:07:41.036 00:07:41.036 Label: (null) 00:07:41.036 UUID: 266df961-5d02-4f89-bdb9-f31721265c4f 00:07:41.036 Node size: 16384 00:07:41.036 Sector size: 4096 00:07:41.036 Filesystem size: 510.00MiB 00:07:41.036 Block group profiles: 00:07:41.036 Data: single 8.00MiB 00:07:41.036 Metadata: DUP 32.00MiB 00:07:41.036 System: DUP 8.00MiB 00:07:41.036 SSD detected: yes 00:07:41.036 Zoned device: no 00:07:41.036 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:41.036 Runtime features: free-space-tree 00:07:41.036 Checksum: crc32c 00:07:41.036 Number of devices: 1 00:07:41.036 Devices: 00:07:41.036 ID SIZE PATH 00:07:41.036 1 510.00MiB /dev/nvme0n1p1 00:07:41.036 00:07:41.036 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:41.036 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@29 -- # i=0 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3737570 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.297 00:07:41.297 real 0m0.656s 00:07:41.297 user 0m0.024s 00:07:41.297 sys 0m0.106s 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:41.297 ************************************ 00:07:41.297 END TEST filesystem_in_capsule_btrfs 00:07:41.297 ************************************ 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:07:41.297 ************************************ 00:07:41.297 START TEST filesystem_in_capsule_xfs 00:07:41.297 ************************************ 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:41.297 10:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:41.557 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 
00:07:41.557 = sectsz=512 attr=2, projid32bit=1 00:07:41.557 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:41.557 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:41.557 data = bsize=4096 blocks=130560, imaxpct=25 00:07:41.558 = sunit=0 swidth=0 blks 00:07:41.558 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:41.558 log =internal log bsize=4096 blocks=16384, version=2 00:07:41.558 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:41.558 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:42.496 Discarding blocks...Done. 00:07:42.496 10:28:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:42.496 10:28:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3737570 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l 
-o NAME 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.401 00:07:44.401 real 0m2.921s 00:07:44.401 user 0m0.030s 00:07:44.401 sys 0m0.047s 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:44.401 ************************************ 00:07:44.401 END TEST filesystem_in_capsule_xfs 00:07:44.401 ************************************ 00:07:44.401 10:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:44.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3737570 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3737570 ']' 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3737570 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3737570 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3737570' 00:07:44.662 killing process with pid 3737570 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3737570 00:07:44.662 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3737570 00:07:44.924 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:44.924 00:07:44.924 real 0m11.140s 00:07:44.924 user 0m42.688s 00:07:44.924 sys 0m1.742s 00:07:44.924 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.924 10:28:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.924 ************************************ 00:07:44.924 END TEST nvmf_filesystem_in_capsule 00:07:44.924 ************************************ 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:45.185 rmmod nvme_tcp 00:07:45.185 rmmod nvme_fabrics 00:07:45.185 rmmod nvme_keyring 00:07:45.185 10:28:33 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.185 10:28:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.095 10:28:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:47.095 00:07:47.095 real 0m26.394s 00:07:47.095 user 1m25.797s 00:07:47.095 sys 0m4.866s 00:07:47.095 10:28:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.095 10:28:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.095 ************************************ 00:07:47.095 END TEST nvmf_filesystem 00:07:47.095 ************************************ 00:07:47.095 10:28:35 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:47.095 10:28:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:47.095 10:28:35 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.095 10:28:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.354 ************************************ 00:07:47.354 START TEST nvmf_target_discovery 00:07:47.354 ************************************ 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:47.354 * Looking for test storage... 00:07:47.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.354 10:28:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:47.355 10:28:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.266 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.266 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:49.266 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:49.266 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:49.266 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:49.267 10:28:37 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:49.267 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:49.267 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:49.267 Found net devices under 0000:08:00.0: cvl_0_0 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:49.267 Found net devices under 0000:08:00.1: cvl_0_1 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.267 10:28:37 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:49.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:07:49.267 00:07:49.267 --- 10.0.0.2 ping statistics --- 00:07:49.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.267 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:49.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:07:49.267 00:07:49.267 --- 10.0.0.1 ping statistics --- 00:07:49.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.267 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3740289 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # 
waitforlisten 3740289 00:07:49.267 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3740289 ']' 00:07:49.268 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.268 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:49.268 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.268 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:49.268 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.268 [2024-07-23 10:28:37.588127] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:49.268 [2024-07-23 10:28:37.588226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.268 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.268 [2024-07-23 10:28:37.652794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.268 [2024-07-23 10:28:37.740680] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.268 [2024-07-23 10:28:37.740743] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.268 [2024-07-23 10:28:37.740759] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.268 [2024-07-23 10:28:37.740773] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:49.268 [2024-07-23 10:28:37.740785] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.268 [2024-07-23 10:28:37.740893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.268 [2024-07-23 10:28:37.741048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.268 [2024-07-23 10:28:37.741100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.268 [2024-07-23 10:28:37.741097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.526 [2024-07-23 10:28:37.882133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:49.526 10:28:37 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.526 Null1 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.526 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.526 [2024-07-23 10:28:37.926451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:37 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 Null2 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 Null3 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i 
in $(seq 1 4) 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 Null4 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener 
discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.527 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.785 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.785 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:49.785 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.785 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.785 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.785 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:07:49.785 00:07:49.785 Discovery Log Number of Records 6, Generation counter 6 00:07:49.785 =====Discovery Log Entry 0====== 00:07:49.785 trtype: tcp 00:07:49.785 adrfam: ipv4 00:07:49.785 subtype: current discovery subsystem 00:07:49.785 treq: not required 00:07:49.785 portid: 0 00:07:49.785 trsvcid: 4420 00:07:49.785 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:49.785 traddr: 10.0.0.2 00:07:49.785 eflags: explicit discovery connections, duplicate discovery information 00:07:49.785 sectype: none 00:07:49.785 =====Discovery Log Entry 1====== 00:07:49.785 trtype: tcp 00:07:49.785 adrfam: ipv4 00:07:49.785 subtype: nvme subsystem 00:07:49.785 treq: not required 00:07:49.785 portid: 0 00:07:49.785 trsvcid: 4420 00:07:49.785 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:49.785 traddr: 10.0.0.2 00:07:49.785 eflags: none 00:07:49.785 sectype: none 00:07:49.785 =====Discovery Log Entry 2====== 00:07:49.785 trtype: tcp 00:07:49.785 adrfam: 
ipv4 00:07:49.785 subtype: nvme subsystem 00:07:49.785 treq: not required 00:07:49.785 portid: 0 00:07:49.785 trsvcid: 4420 00:07:49.785 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:49.785 traddr: 10.0.0.2 00:07:49.785 eflags: none 00:07:49.785 sectype: none 00:07:49.785 =====Discovery Log Entry 3====== 00:07:49.785 trtype: tcp 00:07:49.785 adrfam: ipv4 00:07:49.785 subtype: nvme subsystem 00:07:49.785 treq: not required 00:07:49.785 portid: 0 00:07:49.785 trsvcid: 4420 00:07:49.785 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:49.785 traddr: 10.0.0.2 00:07:49.785 eflags: none 00:07:49.785 sectype: none 00:07:49.785 =====Discovery Log Entry 4====== 00:07:49.785 trtype: tcp 00:07:49.785 adrfam: ipv4 00:07:49.785 subtype: nvme subsystem 00:07:49.785 treq: not required 00:07:49.785 portid: 0 00:07:49.785 trsvcid: 4420 00:07:49.785 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:49.785 traddr: 10.0.0.2 00:07:49.785 eflags: none 00:07:49.785 sectype: none 00:07:49.785 =====Discovery Log Entry 5====== 00:07:49.785 trtype: tcp 00:07:49.785 adrfam: ipv4 00:07:49.785 subtype: discovery subsystem referral 00:07:49.785 treq: not required 00:07:49.785 portid: 0 00:07:49.785 trsvcid: 4430 00:07:49.785 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:49.785 traddr: 10.0.0.2 00:07:49.785 eflags: none 00:07:49.785 sectype: none 00:07:49.785 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:49.785 Perform nvmf subsystem discovery via RPC 00:07:49.785 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:49.785 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.785 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.785 [ 00:07:49.785 { 00:07:49.785 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:49.785 "subtype": "Discovery", 00:07:49.785 "listen_addresses": [ 00:07:49.785 { 
00:07:49.785 "trtype": "TCP", 00:07:49.786 "adrfam": "IPv4", 00:07:49.786 "traddr": "10.0.0.2", 00:07:49.786 "trsvcid": "4420" 00:07:49.786 } 00:07:49.786 ], 00:07:49.786 "allow_any_host": true, 00:07:49.786 "hosts": [] 00:07:49.786 }, 00:07:49.786 { 00:07:49.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:49.786 "subtype": "NVMe", 00:07:49.786 "listen_addresses": [ 00:07:49.786 { 00:07:49.786 "trtype": "TCP", 00:07:49.786 "adrfam": "IPv4", 00:07:49.786 "traddr": "10.0.0.2", 00:07:49.786 "trsvcid": "4420" 00:07:49.786 } 00:07:49.786 ], 00:07:49.786 "allow_any_host": true, 00:07:49.786 "hosts": [], 00:07:49.786 "serial_number": "SPDK00000000000001", 00:07:49.786 "model_number": "SPDK bdev Controller", 00:07:49.786 "max_namespaces": 32, 00:07:49.786 "min_cntlid": 1, 00:07:49.786 "max_cntlid": 65519, 00:07:49.786 "namespaces": [ 00:07:49.786 { 00:07:49.786 "nsid": 1, 00:07:49.786 "bdev_name": "Null1", 00:07:49.786 "name": "Null1", 00:07:49.786 "nguid": "9BE78104FBD94CDABAE97E2D64240665", 00:07:49.786 "uuid": "9be78104-fbd9-4cda-bae9-7e2d64240665" 00:07:49.786 } 00:07:49.786 ] 00:07:49.786 }, 00:07:49.786 { 00:07:49.786 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:49.786 "subtype": "NVMe", 00:07:49.786 "listen_addresses": [ 00:07:49.786 { 00:07:49.786 "trtype": "TCP", 00:07:49.786 "adrfam": "IPv4", 00:07:49.786 "traddr": "10.0.0.2", 00:07:49.786 "trsvcid": "4420" 00:07:49.786 } 00:07:49.786 ], 00:07:49.786 "allow_any_host": true, 00:07:49.786 "hosts": [], 00:07:49.786 "serial_number": "SPDK00000000000002", 00:07:49.786 "model_number": "SPDK bdev Controller", 00:07:49.786 "max_namespaces": 32, 00:07:49.786 "min_cntlid": 1, 00:07:49.786 "max_cntlid": 65519, 00:07:49.786 "namespaces": [ 00:07:49.786 { 00:07:49.786 "nsid": 1, 00:07:49.786 "bdev_name": "Null2", 00:07:49.786 "name": "Null2", 00:07:49.786 "nguid": "F8D30D4860C846C3AB30CE01499043C0", 00:07:49.786 "uuid": "f8d30d48-60c8-46c3-ab30-ce01499043c0" 00:07:49.786 } 00:07:49.786 ] 00:07:49.786 }, 00:07:49.786 { 
00:07:49.786 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:49.786 "subtype": "NVMe", 00:07:49.786 "listen_addresses": [ 00:07:49.786 { 00:07:49.786 "trtype": "TCP", 00:07:49.786 "adrfam": "IPv4", 00:07:49.786 "traddr": "10.0.0.2", 00:07:49.786 "trsvcid": "4420" 00:07:49.786 } 00:07:49.786 ], 00:07:49.786 "allow_any_host": true, 00:07:49.786 "hosts": [], 00:07:49.786 "serial_number": "SPDK00000000000003", 00:07:49.786 "model_number": "SPDK bdev Controller", 00:07:49.786 "max_namespaces": 32, 00:07:49.786 "min_cntlid": 1, 00:07:49.786 "max_cntlid": 65519, 00:07:49.786 "namespaces": [ 00:07:49.786 { 00:07:49.786 "nsid": 1, 00:07:49.786 "bdev_name": "Null3", 00:07:49.786 "name": "Null3", 00:07:49.786 "nguid": "CACEE0A0571A48FBA06FE457B5ADD48F", 00:07:49.786 "uuid": "cacee0a0-571a-48fb-a06f-e457b5add48f" 00:07:49.786 } 00:07:49.786 ] 00:07:49.786 }, 00:07:49.786 { 00:07:49.786 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:49.786 "subtype": "NVMe", 00:07:49.786 "listen_addresses": [ 00:07:49.786 { 00:07:49.786 "trtype": "TCP", 00:07:49.786 "adrfam": "IPv4", 00:07:49.786 "traddr": "10.0.0.2", 00:07:49.786 "trsvcid": "4420" 00:07:49.786 } 00:07:49.786 ], 00:07:49.786 "allow_any_host": true, 00:07:49.786 "hosts": [], 00:07:49.786 "serial_number": "SPDK00000000000004", 00:07:49.786 "model_number": "SPDK bdev Controller", 00:07:49.786 "max_namespaces": 32, 00:07:49.786 "min_cntlid": 1, 00:07:49.786 "max_cntlid": 65519, 00:07:49.786 "namespaces": [ 00:07:49.786 { 00:07:49.786 "nsid": 1, 00:07:49.786 "bdev_name": "Null4", 00:07:49.786 "name": "Null4", 00:07:49.786 "nguid": "F3CD4158E302484AB26C3224DACD15BC", 00:07:49.786 "uuid": "f3cd4158-e302-484a-b26c-3224dacd15bc" 00:07:49.786 } 00:07:49.786 ] 00:07:49.786 } 00:07:49.786 ] 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 
00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 
00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.046 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:50.046 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:50.046 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:50.046 10:28:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:50.046 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:50.046 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:50.046 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:50.046 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:50.046 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:50.047 rmmod nvme_tcp 00:07:50.047 rmmod nvme_fabrics 00:07:50.047 rmmod nvme_keyring 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3740289 ']' 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3740289 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3740289 ']' 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3740289 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3740289 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3740289' 00:07:50.047 killing process with pid 3740289 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3740289 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3740289 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:50.047 10:28:38 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.047 10:28:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.587 10:28:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:52.587 00:07:52.587 real 0m4.981s 00:07:52.587 user 0m3.992s 00:07:52.587 sys 0m1.585s 00:07:52.587 10:28:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.587 10:28:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.587 ************************************ 00:07:52.587 END TEST nvmf_target_discovery 00:07:52.587 ************************************ 00:07:52.587 10:28:40 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:52.587 10:28:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:52.587 10:28:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.587 10:28:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.587 ************************************ 00:07:52.587 START TEST nvmf_referrals 00:07:52.587 ************************************ 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:52.587 * Looking for test storage... 
00:07:52.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.587 10:28:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.588 
10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:52.588 10:28:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.969 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:53.970 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:53.970 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:53.970 Found net devices under 0000:08:00.0: cvl_0_0 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.970 10:28:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:53.970 Found net devices under 0000:08:00.1: cvl_0_1 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:53.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:53.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:07:53.970 00:07:53.970 --- 10.0.0.2 ping statistics --- 00:07:53.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.970 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:07:53.970 00:07:53.970 --- 10.0.0.1 ping statistics --- 00:07:53.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.970 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.970 10:28:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3741829 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3741829 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3741829 ']' 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:53.970 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.294 [2024-07-23 10:28:42.509462] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:54.294 [2024-07-23 10:28:42.509566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.294 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.294 [2024-07-23 10:28:42.575687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.294 [2024-07-23 10:28:42.667289] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.294 [2024-07-23 10:28:42.667356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:54.294 [2024-07-23 10:28:42.667372] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.294 [2024-07-23 10:28:42.667385] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.294 [2024-07-23 10:28:42.667396] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.294 [2024-07-23 10:28:42.667455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.294 [2024-07-23 10:28:42.667595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.294 [2024-07-23 10:28:42.667630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.294 [2024-07-23 10:28:42.667508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.294 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:54.294 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:07:54.294 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.294 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:54.294 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.552 [2024-07-23 10:28:42.811125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.552 10:28:42 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.552 [2024-07-23 10:28:42.823333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.552 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.553 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:54.553 10:28:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.811 10:28:43 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.811 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.070 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.328 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:55.586 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:55.586 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:55.586 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:55.586 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:55.586 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:55.586 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.586 10:28:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:55.586 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:55.586 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:55.586 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:55.586 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:55.586 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.586 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.843 10:28:44 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:55.843 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:55.844 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:55.844 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:55.844 10:28:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:55.844 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:55.844 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:55.844 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:55.844 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:55.844 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:55.844 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:56.104 rmmod nvme_tcp 00:07:56.104 rmmod nvme_fabrics 00:07:56.104 rmmod nvme_keyring 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3741829 ']' 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3741829 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3741829 ']' 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3741829 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3741829 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3741829' 00:07:56.104 killing process with pid 3741829 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3741829 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3741829 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.104 10:28:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.717 10:28:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:58.717 00:07:58.717 real 0m5.993s 00:07:58.717 user 0m8.952s 00:07:58.717 sys 0m1.861s 00:07:58.717 10:28:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.717 10:28:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.717 ************************************ 
00:07:58.717 END TEST nvmf_referrals 00:07:58.717 ************************************ 00:07:58.717 10:28:46 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:58.717 10:28:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:58.717 10:28:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.717 10:28:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:58.717 ************************************ 00:07:58.717 START TEST nvmf_connect_disconnect 00:07:58.717 ************************************ 00:07:58.717 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:58.717 * Looking for test storage... 00:07:58.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.717 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:58.718 10:28:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:00.099 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:00.099 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.099 10:28:48 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:00.099 Found net devices under 0000:08:00.0: cvl_0_0 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:00.099 Found net devices under 0000:08:00.1: cvl_0_1 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.099 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:00.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:00.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:08:00.100 00:08:00.100 --- 10.0.0.2 ping statistics --- 00:08:00.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.100 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:00.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:08:00.100 00:08:00.100 --- 10.0.0.1 ping statistics --- 00:08:00.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.100 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3743617 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3743617 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3743617 ']' 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:00.100 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.360 [2024-07-23 10:28:48.634978] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:00.360 [2024-07-23 10:28:48.635080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.360 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.360 [2024-07-23 10:28:48.699918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.360 [2024-07-23 10:28:48.787958] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.360 [2024-07-23 10:28:48.788029] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.360 [2024-07-23 10:28:48.788045] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.360 [2024-07-23 10:28:48.788058] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.360 [2024-07-23 10:28:48.788070] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:00.360 [2024-07-23 10:28:48.788165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.360 [2024-07-23 10:28:48.788231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.360 [2024-07-23 10:28:48.788283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.360 [2024-07-23 10:28:48.788286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.619 [2024-07-23 10:28:48.930132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.619 [2024-07-23 10:28:48.984633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:00.619 10:28:48 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:00.619 10:28:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:03.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:08:55.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.219 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.704 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 
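The repeated "disconnected 1 controller(s)" records above come from connect_disconnect.sh looping `nvme connect` / `nvme disconnect` num_iterations=100 times against cnode1. A minimal dry-run sketch of that loop is below; the loop body and the echo are illustrative (the real script invokes the nvme CLI, which needs root and a reachable target), while `NVME_CONNECT`, the NQN, and the iteration count are taken from the log itself:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the connect/disconnect loop from connect_disconnect.sh.
# Values below are taken from the log; the echo stands in for the real
# nvme-cli calls, which require root and a live NVMe-oF TCP target.
NVME_CONNECT="nvme connect -i 8"      # -i 8: request 8 I/O queues (from the log)
NQN="nqn.2016-06.io.spdk:cnode1"
num_iterations=100

for ((i = 1; i <= num_iterations; i++)); do
    # Real run would be roughly:
    #   $NVME_CONNECT -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    #   nvme disconnect -n "$NQN"
    echo "iteration $i: connect+disconnect $NQN"
done
```

Each successful `nvme disconnect -n "$NQN"` prints the "NQN:… disconnected 1 controller(s)" line seen throughout the log.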
00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:51.019 rmmod nvme_tcp 00:11:51.019 rmmod nvme_fabrics 00:11:51.019 rmmod nvme_keyring 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3743617 ']' 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3743617 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3743617 ']' 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3743617 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3743617 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3743617' 
00:11:51.019 killing process with pid 3743617 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3743617 00:11:51.019 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3743617 00:11:51.278 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:51.278 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:51.278 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:51.278 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:51.278 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:51.278 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.278 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.278 10:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.188 10:32:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:53.188 00:11:53.188 real 3m54.996s 00:11:53.188 user 14m55.351s 00:11:53.188 sys 0m33.803s 00:11:53.188 10:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:53.188 10:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.188 ************************************ 00:11:53.188 END TEST nvmf_connect_disconnect 00:11:53.188 ************************************ 00:11:53.448 10:32:41 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:53.448 10:32:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:53.448 10:32:41 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:11:53.448 10:32:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:53.448 ************************************ 00:11:53.448 START TEST nvmf_multitarget 00:11:53.448 ************************************ 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:53.448 * Looking for test storage... 00:11:53.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:53.448 10:32:41 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.448 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:53.449 10:32:41 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:53.449 10:32:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:55.355 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:55.355 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.355 10:32:43 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:55.355 Found net devices under 0000:08:00.0: cvl_0_0 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:55.355 Found net devices under 0000:08:00.1: cvl_0_1 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:55.355 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:11:55.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:11:55.355 00:11:55.355 --- 10.0.0.2 ping statistics --- 00:11:55.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.355 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:55.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:55.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:11:55.355 00:11:55.355 --- 10.0.0.1 ping statistics --- 00:11:55.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.355 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:55.355 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3768243 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3768243 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3768243 ']' 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:55.356 10:32:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:55.356 [2024-07-23 10:32:43.670977] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:11:55.356 [2024-07-23 10:32:43.671073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.356 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.356 [2024-07-23 10:32:43.751704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.356 [2024-07-23 10:32:43.857198] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
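The nvmf/common.sh steps earlier in the log (ip netns add, moving cvl_0_0 into the namespace, addressing, iptables, and the two ping checks) set up the loopback test topology the target runs in. A hedged dry-run sketch of that bring-up follows; interface names and addresses are copied from the log, and `RUN=echo` keeps it a no-op (drop it and run as root against real CVL ports to apply):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init network bring-up from nvmf/common.sh.
# Names/addresses (cvl_0_0, cvl_0_1, 10.0.0.1/2, port 4420) are from the log;
# RUN=echo makes every command a no-op so the sketch is safe to execute.
RUN=echo
NS=cvl_0_0_ns_spdk

$RUN ip netns add "$NS"
$RUN ip link set cvl_0_0 netns "$NS"              # target port moves into the netns
$RUN ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side keeps 10.0.0.1
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
$RUN ip link set cvl_0_1 up
$RUN ip netns exec "$NS" ip link set cvl_0_0 up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2                           # initiator -> target sanity check
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator sanity check
```

With the namespace in place, nvmf_tgt is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the log's nvmfappstart line wraps the binary in that prefix.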
00:11:55.356 [2024-07-23 10:32:43.857270] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.356 [2024-07-23 10:32:43.857302] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.356 [2024-07-23 10:32:43.857329] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.356 [2024-07-23 10:32:43.857353] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.614 [2024-07-23 10:32:43.857424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.614 [2024-07-23 10:32:43.857506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.614 [2024-07-23 10:32:43.857541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.614 [2024-07-23 10:32:43.857549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.614 10:32:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:55.614 10:32:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:11:55.614 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:55.614 10:32:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:55.614 10:32:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:55.614 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.614 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:55.614 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:55.614 10:32:44 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:55.871 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:55.871 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:55.871 "nvmf_tgt_1" 00:11:55.871 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:56.129 "nvmf_tgt_2" 00:11:56.129 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:56.129 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:56.129 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:56.129 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:56.387 true 00:11:56.387 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:56.387 true 00:11:56.387 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:56.387 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- 
# nvmftestfini 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:56.644 rmmod nvme_tcp 00:11:56.644 rmmod nvme_fabrics 00:11:56.644 rmmod nvme_keyring 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3768243 ']' 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3768243 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3768243 ']' 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3768243 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:56.644 10:32:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3768243 00:11:56.644 10:32:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:56.644 10:32:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:56.644 10:32:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3768243' 00:11:56.644 killing process 
with pid 3768243 00:11:56.644 10:32:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3768243 00:11:56.644 10:32:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3768243 00:11:56.903 10:32:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.903 10:32:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:56.903 10:32:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:56.903 10:32:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.903 10:32:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:56.903 10:32:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.903 10:32:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.903 10:32:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.809 10:32:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:58.809 00:11:58.809 real 0m5.495s 00:11:58.809 user 0m6.899s 00:11:58.809 sys 0m1.703s 00:11:58.809 10:32:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:58.809 10:32:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:58.809 ************************************ 00:11:58.809 END TEST nvmf_multitarget 00:11:58.809 ************************************ 00:11:58.809 10:32:47 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:58.809 10:32:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:58.809 10:32:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:58.809 10:32:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:58.809 
************************************ 00:11:58.809 START TEST nvmf_rpc 00:11:58.809 ************************************ 00:11:58.809 10:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:59.067 * Looking for test storage... 00:11:59.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:59.068 10:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:00.976 10:32:48 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:00.976 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:00.976 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:00.976 Found net devices under 0000:08:00.0: cvl_0_0 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.976 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:00.977 Found net devices under 0000:08:00.1: cvl_0_1 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.977 10:32:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.977 10:32:49 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:00.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:12:00.977 00:12:00.977 --- 10.0.0.2 ping statistics --- 00:12:00.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.977 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:12:00.977 00:12:00.977 --- 10.0.0.1 ping statistics --- 00:12:00.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.977 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:00.977 
10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3769871 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3769871 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3769871 ']' 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.977 [2024-07-23 10:32:49.168622] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:12:00.977 [2024-07-23 10:32:49.168715] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.977 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.977 [2024-07-23 10:32:49.242276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.977 [2024-07-23 10:32:49.333091] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.977 [2024-07-23 10:32:49.333166] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.977 [2024-07-23 10:32:49.333186] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.977 [2024-07-23 10:32:49.333200] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.977 [2024-07-23 10:32:49.333211] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:00.977 [2024-07-23 10:32:49.333303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.977 [2024-07-23 10:32:49.333329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.977 [2024-07-23 10:32:49.333394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.977 [2024-07-23 10:32:49.333396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.977 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.274 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.274 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:01.274 "tick_rate": 2700000000, 00:12:01.274 "poll_groups": [ 00:12:01.274 { 00:12:01.274 "name": "nvmf_tgt_poll_group_000", 00:12:01.274 "admin_qpairs": 0, 00:12:01.274 "io_qpairs": 0, 00:12:01.274 "current_admin_qpairs": 0, 00:12:01.274 "current_io_qpairs": 0, 00:12:01.274 "pending_bdev_io": 0, 00:12:01.274 "completed_nvme_io": 0, 00:12:01.274 "transports": [] 00:12:01.274 }, 00:12:01.274 { 00:12:01.274 "name": "nvmf_tgt_poll_group_001", 00:12:01.274 "admin_qpairs": 0, 00:12:01.274 "io_qpairs": 0, 00:12:01.274 "current_admin_qpairs": 
0, 00:12:01.274 "current_io_qpairs": 0, 00:12:01.274 "pending_bdev_io": 0, 00:12:01.274 "completed_nvme_io": 0, 00:12:01.274 "transports": [] 00:12:01.274 }, 00:12:01.274 { 00:12:01.275 "name": "nvmf_tgt_poll_group_002", 00:12:01.275 "admin_qpairs": 0, 00:12:01.275 "io_qpairs": 0, 00:12:01.275 "current_admin_qpairs": 0, 00:12:01.275 "current_io_qpairs": 0, 00:12:01.275 "pending_bdev_io": 0, 00:12:01.275 "completed_nvme_io": 0, 00:12:01.275 "transports": [] 00:12:01.275 }, 00:12:01.275 { 00:12:01.275 "name": "nvmf_tgt_poll_group_003", 00:12:01.275 "admin_qpairs": 0, 00:12:01.275 "io_qpairs": 0, 00:12:01.275 "current_admin_qpairs": 0, 00:12:01.275 "current_io_qpairs": 0, 00:12:01.275 "pending_bdev_io": 0, 00:12:01.275 "completed_nvme_io": 0, 00:12:01.275 "transports": [] 00:12:01.275 } 00:12:01.275 ] 00:12:01.275 }' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.275 [2024-07-23 10:32:49.572448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:01.275 "tick_rate": 2700000000, 00:12:01.275 "poll_groups": [ 00:12:01.275 { 00:12:01.275 "name": "nvmf_tgt_poll_group_000", 00:12:01.275 "admin_qpairs": 0, 00:12:01.275 "io_qpairs": 0, 00:12:01.275 "current_admin_qpairs": 0, 00:12:01.275 "current_io_qpairs": 0, 00:12:01.275 "pending_bdev_io": 0, 00:12:01.275 "completed_nvme_io": 0, 00:12:01.275 "transports": [ 00:12:01.275 { 00:12:01.275 "trtype": "TCP" 00:12:01.275 } 00:12:01.275 ] 00:12:01.275 }, 00:12:01.275 { 00:12:01.275 "name": "nvmf_tgt_poll_group_001", 00:12:01.275 "admin_qpairs": 0, 00:12:01.275 "io_qpairs": 0, 00:12:01.275 "current_admin_qpairs": 0, 00:12:01.275 "current_io_qpairs": 0, 00:12:01.275 "pending_bdev_io": 0, 00:12:01.275 "completed_nvme_io": 0, 00:12:01.275 "transports": [ 00:12:01.275 { 00:12:01.275 "trtype": "TCP" 00:12:01.275 } 00:12:01.275 ] 00:12:01.275 }, 00:12:01.275 { 00:12:01.275 "name": "nvmf_tgt_poll_group_002", 00:12:01.275 "admin_qpairs": 0, 00:12:01.275 "io_qpairs": 0, 00:12:01.275 "current_admin_qpairs": 0, 00:12:01.275 "current_io_qpairs": 0, 00:12:01.275 "pending_bdev_io": 0, 00:12:01.275 "completed_nvme_io": 0, 00:12:01.275 "transports": [ 00:12:01.275 { 00:12:01.275 "trtype": "TCP" 00:12:01.275 } 00:12:01.275 ] 00:12:01.275 }, 00:12:01.275 { 00:12:01.275 "name": "nvmf_tgt_poll_group_003", 00:12:01.275 "admin_qpairs": 0, 00:12:01.275 "io_qpairs": 0, 00:12:01.275 "current_admin_qpairs": 0, 00:12:01.275 "current_io_qpairs": 0, 00:12:01.275 "pending_bdev_io": 0, 00:12:01.275 "completed_nvme_io": 0, 00:12:01.275 "transports": [ 00:12:01.275 { 00:12:01.275 "trtype": "TCP" 00:12:01.275 } 00:12:01.275 ] 00:12:01.275 } 
00:12:01.275 ] 00:12:01.275 }' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.275 Malloc1 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.275 [2024-07-23 10:32:49.731163] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:01.275 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:12:01.275 [2024-07-23 10:32:49.753548] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:12:01.535 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:01.535 could not add new controller: failed to write to nvme-fabrics device 00:12:01.535 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:01.535 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:01.535 10:32:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:01.535 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:01.535 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:01.535 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.535 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.535 10:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.535 10:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.794 10:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.794 10:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:01.794 10:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.794 10:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:01.794 10:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.330 10:32:52 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.330 [2024-07-23 10:32:52.365612] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:12:04.330 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:04.330 could not add new controller: failed to write to nvme-fabrics device 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.330 10:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.589 10:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:04.589 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:04.589 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.589 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:04.589 10:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.553 10:32:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:06.553 10:32:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.553 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.553 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.553 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.553 10:32:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:06.553 10:32:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:06.553 10:32:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.553 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.553 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.553 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.554 [2024-07-23 10:32:55.024644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
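The qpair accounting earlier in this log (`target/rpc.sh@19`/`@20`) goes through a `jsum` helper: a jq filter extracts one number per poll group from the `nvmf_get_stats` output, and awk sums them. A minimal sketch of that pattern, assuming `jq` is available and using a canned `$stats` document in place of a live `rpc_cmd nvmf_get_stats` call (the real helper's exact signature may differ):

```shell
#!/usr/bin/env bash
# jsum: sum a numeric per-poll-group field out of the stats JSON.
# Mirrors the logged pipeline: jq "$filter" | awk '{s+=$1}END{print s}'.
jsum() {
  local filter=$1
  jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

# Stand-in for the captured nvmf_get_stats output (hypothetical values).
stats='{"tick_rate":2700000000,"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","admin_qpairs":0,"io_qpairs":2},
  {"name":"nvmf_tgt_poll_group_001","admin_qpairs":1,"io_qpairs":3}]}'

jsum '.poll_groups[].admin_qpairs'   # prints 1
jsum '.poll_groups[].io_qpairs'      # prints 5
```

In the logged run every counter is 0, which is why both `(( 0 == 0 ))` checks at `target/rpc.sh@35`/`@36` pass before any namespace is attached.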
00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.554 10:32:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.124 10:32:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.124 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:07.124 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.124 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:07.124 10:32:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:09.033 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:09.033 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:09.033 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.033 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:09.033 10:32:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.033 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:09.033 10:32:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.033 10:32:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.033 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.293 [2024-07-23 10:32:57.583271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.293 10:32:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.294 10:32:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.865 10:32:58 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:09.865 10:32:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:09.865 10:32:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.865 10:32:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:09.865 10:32:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:11.771 10:33:00 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.771 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.032 [2024-07-23 10:33:00.298356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:12.032 10:33:00 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.032 10:33:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.291 10:33:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.291 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:12.291 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.291 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:12.291 10:33:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 
0 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.832 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.833 [2024-07-23 10:33:02.844836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.833 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.833 10:33:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:14.833 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.833 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.833 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.833 10:33:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.833 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.833 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.833 10:33:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.833 10:33:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.833 10:33:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.833 10:33:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 
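The `NOT nvme connect ...` sequences in this log (`autotest_common.sh@648`–`@675`) are negative tests: the connect is expected to be rejected while the host NQN is not authorized, and the wrapper succeeds only if the wrapped command fails. A simplified sketch of that pattern (the real helper also validates the argument via `valid_exec_arg` and distinguishes signal exits with `(( es > 128 ))`, which is omitted here):

```shell
#!/usr/bin/env bash
# NOT: invert a command's exit status for expected-failure assertions.
NOT() {
  local es=0
  "$@" || es=$?   # capture the exit status without tripping set -e
  (( es != 0 ))   # succeed only if the command failed
}

NOT false && echo "false failed, as expected"
NOT true  || echo "true succeeded, so NOT reports failure"
```

This is why the `could not add new controller: failed to write to nvme-fabrics device` lines above are followed by `es=1` and the test still proceeds: the I/O error is the asserted outcome.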
00:12:14.833 10:33:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.833 10:33:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:14.833 10:33:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.367 [2024-07-23 10:33:05.404507] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
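The `waitforserial` / `waitforserial_disconnect` steps repeated throughout this log poll `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` up to 16 times with a 2-second sleep until the expected device count appears. A generic sketch of that retry loop, parameterized over a check command so it does not require `lsblk` (the real helpers hardcode the lsblk pipeline):

```shell
#!/usr/bin/env bash
# Poll "$check" (a command printing a count) until it reaches "$expected",
# retrying up to 16 times with a 2s pause, as in waitforserial.
wait_for_count() {
  local check=$1 expected=$2 i=0
  while (( i++ <= 15 )); do
    (( $($check) >= expected )) && return 0
    sleep 2
  done
  return 1
}

# Hypothetical check standing in for: lsblk -l -o NAME,SERIAL | grep -c SERIAL
count_devices() { echo 1; }
wait_for_count count_devices 1 && echo "device present"
```

In the log the first probe already reports `nvme_devices=1`, so each wait returns 0 after the initial `sleep 2` and the disconnect path follows immediately.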
00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.367 10:33:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.628 10:33:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.628 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:17.628 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.628 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:17.628 10:33:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:19.535 10:33:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:19.535 10:33:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:19.535 10:33:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.535 10:33:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:19.535 10:33:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.535 10:33:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:19.535 10:33:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.535 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.535 10:33:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.535 10:33:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:19.535 10:33:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:19.535 10:33:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.535 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.535 10:33:08 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.794 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.794 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.794 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.794 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.794 [2024-07-23 10:33:08.046857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.794 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.794 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.794 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.794 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.794 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 [2024-07-23 10:33:08.094934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 [2024-07-23 10:33:08.143093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 [2024-07-23 10:33:08.191254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 [2024-07-23 10:33:08.239403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.795 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:19.795 "tick_rate": 2700000000, 00:12:19.795 "poll_groups": [ 00:12:19.795 { 00:12:19.795 "name": "nvmf_tgt_poll_group_000", 00:12:19.796 "admin_qpairs": 2, 00:12:19.796 "io_qpairs": 56, 00:12:19.796 "current_admin_qpairs": 0, 00:12:19.796 "current_io_qpairs": 0, 00:12:19.796 "pending_bdev_io": 0, 00:12:19.796 "completed_nvme_io": 173, 00:12:19.796 "transports": [ 00:12:19.796 { 00:12:19.796 "trtype": "TCP" 00:12:19.796 } 00:12:19.796 ] 00:12:19.796 }, 00:12:19.796 { 00:12:19.796 "name": "nvmf_tgt_poll_group_001", 00:12:19.796 "admin_qpairs": 2, 00:12:19.796 "io_qpairs": 56, 
00:12:19.796 "current_admin_qpairs": 0, 00:12:19.796 "current_io_qpairs": 0, 00:12:19.796 "pending_bdev_io": 0, 00:12:19.796 "completed_nvme_io": 157, 00:12:19.796 "transports": [ 00:12:19.796 { 00:12:19.796 "trtype": "TCP" 00:12:19.796 } 00:12:19.796 ] 00:12:19.796 }, 00:12:19.796 { 00:12:19.796 "name": "nvmf_tgt_poll_group_002", 00:12:19.796 "admin_qpairs": 1, 00:12:19.796 "io_qpairs": 56, 00:12:19.796 "current_admin_qpairs": 0, 00:12:19.796 "current_io_qpairs": 0, 00:12:19.796 "pending_bdev_io": 0, 00:12:19.796 "completed_nvme_io": 133, 00:12:19.796 "transports": [ 00:12:19.796 { 00:12:19.796 "trtype": "TCP" 00:12:19.796 } 00:12:19.796 ] 00:12:19.796 }, 00:12:19.796 { 00:12:19.796 "name": "nvmf_tgt_poll_group_003", 00:12:19.796 "admin_qpairs": 2, 00:12:19.796 "io_qpairs": 56, 00:12:19.796 "current_admin_qpairs": 0, 00:12:19.796 "current_io_qpairs": 0, 00:12:19.796 "pending_bdev_io": 0, 00:12:19.796 "completed_nvme_io": 111, 00:12:19.796 "transports": [ 00:12:19.796 { 00:12:19.796 "trtype": "TCP" 00:12:19.796 } 00:12:19.796 ] 00:12:19.796 } 00:12:19.796 ] 00:12:19.796 }' 00:12:19.796 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:19.796 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:19.796 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:19.796 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 224 > 0 )) 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:20.054 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:20.054 rmmod nvme_tcp 00:12:20.054 rmmod nvme_fabrics 00:12:20.054 rmmod nvme_keyring 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3769871 ']' 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3769871 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3769871 ']' 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3769871 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3769871 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:20.055 
10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3769871' 00:12:20.055 killing process with pid 3769871 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3769871 00:12:20.055 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3769871 00:12:20.315 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:20.315 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:20.315 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:20.315 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:20.315 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:20.315 10:33:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.315 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.315 10:33:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.220 10:33:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:22.220 00:12:22.220 real 0m23.433s 00:12:22.220 user 1m16.466s 00:12:22.220 sys 0m3.685s 00:12:22.220 10:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:22.220 10:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.220 ************************************ 00:12:22.220 END TEST nvmf_rpc 00:12:22.220 ************************************ 00:12:22.479 10:33:10 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:22.479 10:33:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:22.479 10:33:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:22.479 10:33:10 nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:12:22.479 ************************************ 00:12:22.479 START TEST nvmf_invalid 00:12:22.479 ************************************ 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:22.479 * Looking for test storage... 00:12:22.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:22.479 10:33:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.386 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:24.387 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.387 10:33:12 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:24.387 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:24.387 Found net devices under 0000:08:00.0: cvl_0_0 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:24.387 Found net devices under 0000:08:00.1: cvl_0_1 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.387 10:33:12 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:24.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:24.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:12:24.387 00:12:24.387 --- 10.0.0.2 ping statistics --- 00:12:24.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.387 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:24.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:12:24.387 00:12:24.387 --- 10.0.0.1 ping statistics --- 00:12:24.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.387 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=3773875 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3773875 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3773875 ']' 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:24.387 [2024-07-23 10:33:12.589661] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:12:24.387 [2024-07-23 10:33:12.589745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.387 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.387 [2024-07-23 10:33:12.659064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.387 [2024-07-23 10:33:12.750896] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.387 [2024-07-23 10:33:12.750963] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:24.387 [2024-07-23 10:33:12.750978] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.387 [2024-07-23 10:33:12.750991] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.387 [2024-07-23 10:33:12.751003] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.387 [2024-07-23 10:33:12.752503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.387 [2024-07-23 10:33:12.752569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.387 [2024-07-23 10:33:12.752650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.387 [2024-07-23 10:33:12.752683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:24.387 10:33:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.388 10:33:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:24.388 10:33:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3346 00:12:24.959 [2024-07-23 10:33:13.156778] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:24.959 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # 
out='request: 00:12:24.959 { 00:12:24.959 "nqn": "nqn.2016-06.io.spdk:cnode3346", 00:12:24.959 "tgt_name": "foobar", 00:12:24.959 "method": "nvmf_create_subsystem", 00:12:24.959 "req_id": 1 00:12:24.959 } 00:12:24.959 Got JSON-RPC error response 00:12:24.959 response: 00:12:24.959 { 00:12:24.959 "code": -32603, 00:12:24.959 "message": "Unable to find target foobar" 00:12:24.959 }' 00:12:24.959 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:24.959 { 00:12:24.959 "nqn": "nqn.2016-06.io.spdk:cnode3346", 00:12:24.959 "tgt_name": "foobar", 00:12:24.959 "method": "nvmf_create_subsystem", 00:12:24.959 "req_id": 1 00:12:24.959 } 00:12:24.959 Got JSON-RPC error response 00:12:24.959 response: 00:12:24.959 { 00:12:24.959 "code": -32603, 00:12:24.959 "message": "Unable to find target foobar" 00:12:24.959 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:24.959 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:24.959 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode581 00:12:24.959 [2024-07-23 10:33:13.457778] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode581: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:25.217 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:25.217 { 00:12:25.217 "nqn": "nqn.2016-06.io.spdk:cnode581", 00:12:25.217 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:25.217 "method": "nvmf_create_subsystem", 00:12:25.217 "req_id": 1 00:12:25.217 } 00:12:25.217 Got JSON-RPC error response 00:12:25.217 response: 00:12:25.217 { 00:12:25.217 "code": -32602, 00:12:25.217 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:25.217 }' 00:12:25.217 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:25.217 { 00:12:25.217 "nqn": 
"nqn.2016-06.io.spdk:cnode581", 00:12:25.217 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:25.217 "method": "nvmf_create_subsystem", 00:12:25.217 "req_id": 1 00:12:25.217 } 00:12:25.217 Got JSON-RPC error response 00:12:25.217 response: 00:12:25.217 { 00:12:25.217 "code": -32602, 00:12:25.217 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:25.217 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:25.217 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:25.217 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24588 00:12:25.475 [2024-07-23 10:33:13.758828] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24588: invalid model number 'SPDK_Controller' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:25.475 { 00:12:25.475 "nqn": "nqn.2016-06.io.spdk:cnode24588", 00:12:25.475 "model_number": "SPDK_Controller\u001f", 00:12:25.475 "method": "nvmf_create_subsystem", 00:12:25.475 "req_id": 1 00:12:25.475 } 00:12:25.475 Got JSON-RPC error response 00:12:25.475 response: 00:12:25.475 { 00:12:25.475 "code": -32602, 00:12:25.475 "message": "Invalid MN SPDK_Controller\u001f" 00:12:25.475 }' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:25.475 { 00:12:25.475 "nqn": "nqn.2016-06.io.spdk:cnode24588", 00:12:25.475 "model_number": "SPDK_Controller\u001f", 00:12:25.475 "method": "nvmf_create_subsystem", 00:12:25.475 "req_id": 1 00:12:25.475 } 00:12:25.475 Got JSON-RPC error response 00:12:25.475 response: 00:12:25.475 { 00:12:25.475 "code": -32602, 00:12:25.475 "message": "Invalid MN SPDK_Controller\u001f" 00:12:25.475 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@19 -- # local length=21 ll 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x41' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+== 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 48 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.475 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ']pAmGGkvrj@=$sYl0NV49' 00:12:25.476 10:33:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ']pAmGGkvrj@=$sYl0NV49' nqn.2016-06.io.spdk:cnode5637 00:12:25.734 [2024-07-23 10:33:14.131958] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5637: invalid serial number ']pAmGGkvrj@=$sYl0NV49' 00:12:25.734 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:25.734 { 00:12:25.734 "nqn": "nqn.2016-06.io.spdk:cnode5637", 00:12:25.734 "serial_number": "]pAmGGkvrj@=$sYl0NV49", 00:12:25.734 "method": "nvmf_create_subsystem", 00:12:25.734 "req_id": 1 00:12:25.735 } 00:12:25.735 Got JSON-RPC error response 00:12:25.735 response: 00:12:25.735 { 00:12:25.735 "code": -32602, 00:12:25.735 "message": "Invalid SN ]pAmGGkvrj@=$sYl0NV49" 00:12:25.735 }' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:25.735 { 00:12:25.735 "nqn": "nqn.2016-06.io.spdk:cnode5637", 00:12:25.735 "serial_number": "]pAmGGkvrj@=$sYl0NV49", 00:12:25.735 "method": "nvmf_create_subsystem", 00:12:25.735 "req_id": 1 00:12:25.735 } 00:12:25.735 Got JSON-RPC error response 00:12:25.735 response: 00:12:25.735 { 00:12:25.735 "code": -32602, 00:12:25.735 "message": "Invalid SN ]pAmGGkvrj@=$sYl0NV49" 00:12:25.735 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' 
'39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x4a' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=f 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 38 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.735 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.736 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x2e' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.994 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:25.995 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:25.995 10:33:14 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # string+=y 00:12:25.995 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.995 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.995 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:25.995 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:25.995 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:25.995 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.995 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.995 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:12:25.995 10:33:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'UH(}`hLJr5-#.lWRf274u& /dev/null' 00:12:28.837 10:33:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.405 10:33:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.405 00:12:31.405 real 0m8.620s 00:12:31.405 user 0m22.294s 00:12:31.405 sys 0m2.171s 00:12:31.405 10:33:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:31.405 10:33:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:31.405 ************************************ 00:12:31.405 END TEST nvmf_invalid 00:12:31.405 ************************************ 00:12:31.405 10:33:19 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:31.405 10:33:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:31.405 10:33:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:31.405 10:33:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:31.405 ************************************ 00:12:31.405 START TEST nvmf_abort 00:12:31.405 
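The xtrace above records invalid.sh assembling a test string one character at a time: each loop iteration renders a code point as hex with `printf %x`, decodes it back to a character, and appends it. A minimal sketch of that pattern, assuming bash; `gen_random_string` is an illustrative wrapper name, not a function from the test itself:

```shell
# Build a string of printable characters the way the traced loop does:
# pick a code point, render it as hex, decode the \xNN escape back to a
# character, and append. gen_random_string is a hypothetical helper.
gen_random_string() {
    local length=$1 string='' ll code
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))                 # printable ASCII 33..126
        string+=$(printf "\x$(printf %x "$code")")   # hex -> character
    done
    printf '%s\n' "$string"
}
```

The test exercises this to produce deliberately awkward subsystem names (backticks, braces, `&`, `<`) for its invalid-input checks, which is why the characters are built from raw code points rather than typed literally.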
************************************ 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:31.406 * Looking for test storage... 00:12:31.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.406 
10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- 
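The PATH printed above grows a duplicated `/opt/golangci:/opt/protoc:/opt/go` prefix on each step because paths/export.sh prepends the same directories every time it is sourced. A hedged sketch of an idempotent prepend that would avoid the repetition; `path_prepend` is an illustrative helper, not part of the repository:

```shell
# Idempotent PATH prepend: only add the directory if it is not already a
# component, avoiding the duplicated segments visible in the log above.
# path_prepend is a hypothetical helper, not from paths/export.sh.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;               # already present, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
```

Wrapping `$PATH` in colons before matching makes the component check exact, so `/opt/go` does not falsely match `/opt/golangci`.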
nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.406 10:33:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:32.787 10:33:21 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:32.787 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.787 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:32.787 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:32.788 Found net devices under 0000:08:00.0: cvl_0_0 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:32.788 Found net devices under 
0000:08:00.1: cvl_0_1 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:32.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:12:32.788 00:12:32.788 --- 10.0.0.2 ping statistics --- 00:12:32.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.788 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:12:32.788 00:12:32.788 --- 10.0.0.1 ping statistics --- 00:12:32.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.788 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3775948 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3775948 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3775948 ']' 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:32.788 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.046 [2024-07-23 10:33:21.307400] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:12:33.046 [2024-07-23 10:33:21.307500] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.046 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.046 [2024-07-23 10:33:21.372143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.046 [2024-07-23 10:33:21.459667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.046 [2024-07-23 10:33:21.459727] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.046 [2024-07-23 10:33:21.459743] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.046 [2024-07-23 10:33:21.459756] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.046 [2024-07-23 10:33:21.459768] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:33.046 [2024-07-23 10:33:21.459860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.046 [2024-07-23 10:33:21.459948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.046 [2024-07-23 10:33:21.459952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.305 [2024-07-23 10:33:21.590969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.305 Malloc0 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.305 Delay0 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.305 [2024-07-23 10:33:21.668032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.305 10:33:21 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:33.305 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.305 [2024-07-23 10:33:21.773322] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:35.834 Initializing NVMe Controllers 00:12:35.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:35.834 controller IO queue size 128 less than required 00:12:35.834 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:35.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:35.834 Initialization complete. Launching workers. 
00:12:35.834 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29506 00:12:35.834 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29571, failed to submit 62 00:12:35.834 success 29510, unsuccess 61, failed 0 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.834 rmmod nvme_tcp 00:12:35.834 rmmod nvme_fabrics 00:12:35.834 rmmod nvme_keyring 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3775948 ']' 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3775948 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3775948 ']' 00:12:35.834 10:33:23 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3775948 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3775948 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3775948' 00:12:35.834 killing process with pid 3775948 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3775948 00:12:35.834 10:33:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3775948 00:12:35.834 10:33:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.834 10:33:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:35.834 10:33:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.834 10:33:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.834 10:33:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.834 10:33:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.834 10:33:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.834 10:33:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.738 10:33:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:37.738 00:12:37.738 real 0m6.714s 00:12:37.738 user 0m10.054s 00:12:37.738 sys 0m2.172s 00:12:37.738 10:33:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:12:37.738 10:33:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:37.738 ************************************ 00:12:37.738 END TEST nvmf_abort 00:12:37.738 ************************************ 00:12:37.738 10:33:26 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:37.738 10:33:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:37.738 10:33:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:37.738 10:33:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:37.738 ************************************ 00:12:37.738 START TEST nvmf_ns_hotplug_stress 00:12:37.738 ************************************ 00:12:37.738 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:37.738 * Looking for test storage... 
00:12:37.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.997 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.998 10:33:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:37.998 10:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:39.906 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:39.907 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.907 
10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:39.907 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:39.907 
Found net devices under 0000:08:00.0: cvl_0_0 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:39.907 Found net devices under 0000:08:00.1: cvl_0_1 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:39.907 10:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.907 10:33:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:39.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:12:39.907 00:12:39.907 --- 10.0.0.2 ping statistics --- 00:12:39.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.907 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:12:39.907 00:12:39.907 --- 10.0.0.1 ping statistics --- 00:12:39.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.907 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3777704 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3777704 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3777704 ']' 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.907 [2024-07-23 10:33:28.108733] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:12:39.907 [2024-07-23 10:33:28.108827] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.907 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.907 [2024-07-23 10:33:28.174701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:39.907 [2024-07-23 10:33:28.265408] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.907 [2024-07-23 10:33:28.265486] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.907 [2024-07-23 10:33:28.265504] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.907 [2024-07-23 10:33:28.265528] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.907 [2024-07-23 10:33:28.265546] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:39.907 [2024-07-23 10:33:28.265631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.907 [2024-07-23 10:33:28.265713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.907 [2024-07-23 10:33:28.265743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:39.907 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.908 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.908 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:39.908 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:40.166 [2024-07-23 10:33:28.658150] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.424 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:40.682 10:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.940 [2024-07-23 10:33:29.253705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:12:40.940 10:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:41.198 10:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:41.456 Malloc0 00:12:41.456 10:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:41.714 Delay0 00:12:41.714 10:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.972 10:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:42.230 NULL1 00:12:42.230 10:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:42.488 10:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3777991 00:12:42.489 10:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:42.489 10:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:42.489 10:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.489 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.862 Read completed with error (sct=0, sc=11) 00:12:43.862 10:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:44.120 10:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:44.120 10:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:44.379 true 00:12:44.379 10:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:44.379 10:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:44.943 10:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.509 10:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 
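The target-setup sequence traced above (TCP transport, subsystem `cnode1`, listener on 10.0.0.2:4420, then the Malloc0/Delay0/NULL1 bdevs and their namespaces) can be sketched as the RPC calls below. The `rpc` wrapper is an assumption added so the sketch is a dry run; the real test invokes `spdk/scripts/rpc.py` directly, and all arguments are taken verbatim from the trace.

```shell
# Dry-run sketch of the setup phase seen in the trace.  rpc() only
# records and echoes each call; point it at spdk/scripts/rpc.py to
# execute for real (the wrapper itself is an illustration, not part
# of the test script).
CALLS=""
rpc() { CALLS="$CALLS $*;"; echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc bdev_null_create NULL1 1000 512
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

After this point the trace starts `spdk_nvme_perf` against the same subsystem and records its PID, which the hotplug loop then polls with `kill -0`.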
00:12:45.509 10:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:45.509 true 00:12:45.767 10:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:45.767 10:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.025 10:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.283 10:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:46.283 10:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:46.541 true 00:12:46.541 10:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:46.541 10:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.799 10:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.057 10:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:47.057 10:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:47.316 true 00:12:47.316 10:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
3777991 00:12:47.316 10:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.251 10:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.510 10:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:48.510 10:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:48.768 true 00:12:48.768 10:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:48.768 10:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.026 10:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.284 10:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:49.284 10:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:49.540 true 00:12:49.540 10:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:49.540 10:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.796 10:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.054 10:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:50.054 10:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:50.312 true 00:12:50.312 10:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:50.312 10:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.686 10:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.686 10:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:51.686 10:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:51.943 true 00:12:51.943 10:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 3777991 00:12:51.943 10:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.878 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:52.878 10:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.136 10:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:53.136 10:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:53.394 true 00:12:53.394 10:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:53.394 10:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.651 10:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.909 10:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:53.909 10:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:54.167 true 00:12:54.167 10:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:54.167 10:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:12:54.426 10:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.687 10:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:54.687 10:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:54.946 true 00:12:54.946 10:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:54.946 10:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.881 10:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.139 10:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:56.139 10:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:56.428 true 00:12:56.428 10:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:56.428 10:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
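The cycle repeated throughout this trace (remove namespace 1, re-add Delay0, resize NULL1 from 1000 toward 1028) can be sketched as the loop below. The `rpc` echo wrapper and the fixed three-pass loop bound are assumptions for a self-contained dry run; the real `ns_hotplug_stress.sh` loop calls `rpc.py` directly and keeps cycling while `kill -0 $PERF_PID` reports the perf process still alive.

```shell
# Dry-run sketch of one hotplug stress cycle per loop pass:
# detach namespace 1, re-attach Delay0, grow NULL1 by one block.
rpc() { echo "rpc.py $*"; }   # assumption: echoes instead of executing
NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000
for pass in 1 2 3; do         # real test loops while perf is running
    rpc nvmf_subsystem_remove_ns "$NQN" 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
done
```

The "Read completed with error (sct=0, sc=11)" messages interleaved in the trace are the expected client-side symptom of this churn: in-flight reads complete with a namespace-related error each time namespace 1 is yanked under load.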
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.735 10:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.994 10:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:56.994 10:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:57.560 true 00:12:57.560 10:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:57.560 10:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.125 10:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.384 10:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:58.384 10:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:58.642 true 00:12:58.642 10:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:58.642 10:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.208 10:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.466 10:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:59.466 10:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:59.724 true 00:12:59.724 10:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:12:59.724 10:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.982 10:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.240 10:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:00.240 10:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:00.498 true 00:13:00.498 10:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:00.498 10:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.431 10:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.688 10:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:01.688 10:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1017 00:13:01.946 true 00:13:01.946 10:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:01.946 10:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.204 10:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.461 10:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:02.461 10:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:02.719 true 00:13:02.719 10:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:02.719 10:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.977 10:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.235 10:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:03.235 10:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:03.235 true 00:13:03.235 10:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:03.235 10:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.609 10:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.609 10:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:04.609 10:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:05.175 true 00:13:05.175 10:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:05.175 10:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.433 10:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.691 10:33:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:05.691 10:33:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:05.949 true 00:13:05.949 10:33:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:05.949 10:33:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.207 10:33:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.465 10:33:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:06.465 10:33:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:06.723 true 00:13:06.723 10:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:06.723 10:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.658 10:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.916 10:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:07.916 10:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:08.174 true 00:13:08.174 10:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:08.174 10:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.740 10:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.740 10:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1024 00:13:08.740 10:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:08.998 true 00:13:08.998 10:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:08.998 10:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.256 10:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.514 10:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:09.514 10:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:09.772 true 00:13:09.772 10:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:09.772 10:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.703 10:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.961 10:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:10.961 10:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:11.218 true 
00:13:11.218 10:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:11.218 10:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.476 10:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.734 10:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:11.734 10:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:11.991 true 00:13:11.991 10:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:11.991 10:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.923 10:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.923 Initializing NVMe Controllers 00:13:12.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:12.923 Controller IO queue size 128, less than required. 00:13:12.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:12.923 Controller IO queue size 128, less than required. 00:13:12.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:13:12.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:12.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:12.923 Initialization complete. Launching workers. 00:13:12.923 ======================================================== 00:13:12.923 Latency(us) 00:13:12.923 Device Information : IOPS MiB/s Average min max 00:13:12.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 787.83 0.38 73892.01 3633.73 1015019.49 00:13:12.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8368.10 4.09 15296.36 4569.50 533620.40 00:13:12.923 ======================================================== 00:13:12.923 Total : 9155.93 4.47 20338.29 3633.73 1015019.49 00:13:12.923 00:13:13.181 10:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:13.181 10:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:13.438 true 00:13:13.438 10:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3777991 00:13:13.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3777991) - No such process 00:13:13.438 10:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3777991 00:13:13.438 10:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.696 10:34:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.954 10:34:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # 
nthreads=8 00:13:13.954 10:34:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:13.954 10:34:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:13.954 10:34:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:13.954 10:34:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:14.212 null0 00:13:14.212 10:34:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:14.212 10:34:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:14.212 10:34:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:14.470 null1 00:13:14.728 10:34:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:14.728 10:34:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:14.728 10:34:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:14.984 null2 00:13:14.984 10:34:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:14.984 10:34:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:14.984 10:34:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:15.242 null3 00:13:15.242 10:34:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:15.242 10:34:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:15.242 10:34:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:15.500 null4 00:13:15.500 10:34:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:15.500 10:34:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:15.500 10:34:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:15.757 null5 00:13:15.757 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:15.757 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:15.757 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:16.015 null6 00:13:16.015 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:16.015 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:16.015 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:16.274 null7 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:16.274 10:34:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3781182 3781183 3781185 3781187 3781189 3781191 3781193 3781195 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.274 10:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.841 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.100 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.100 10:34:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:17.358 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.358 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:17.358 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:17.358 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:17.358 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.358 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:17.358 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:17.358 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.622 10:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:17.883 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:17.883 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:17.883 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.883 10:34:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:17.883 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:17.883 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.883 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:17.883 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.142 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:18.400 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:18.400 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.400 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:18.400 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.400 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:18.401 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.401 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:18.401 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.659 10:34:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.659 10:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:18.659 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.659 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.659 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:18.918 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:18.918 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.918 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.918 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:18.918 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:18.918 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:18.918 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.918 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:18.918 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.918 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.176 10:34:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.176 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:19.436 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.436 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.436 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:19.436 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:19.436 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.436 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:19.436 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:19.436 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.436 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
4 00:13:19.436 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:19.726 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.726 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.726 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:19.726 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:19.726 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.726 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.726 10:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.726 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.009 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.268 10:34:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:20.268 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.526 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:20.526 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.526 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.526 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:20.526 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.526 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:20.526 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:20.526 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:20.526 10:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:20.526 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.526 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.526 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:20.785 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:21.043 10:34:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.043 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.043 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.043 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:21.043 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.043 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:21.043 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:21.043 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:21.043 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
4 nqn.2016-06.io.spdk:cnode1 null3 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.302 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:21.560 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.560 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.560 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.560 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:21.560 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:21.560 10:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.560 10:34:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:21.560 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:21.560 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:21.560 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.819 10:34:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.819 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.079 rmmod nvme_tcp 00:13:22.079 rmmod nvme_fabrics 00:13:22.079 rmmod nvme_keyring 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3777704 ']' 00:13:22.079 
10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3777704 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3777704 ']' 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3777704 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3777704 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3777704' 00:13:22.079 killing process with pid 3777704 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3777704 00:13:22.079 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3777704 00:13:22.339 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:22.339 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:22.339 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:22.339 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.339 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:22.339 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.339 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:22.339 10:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.247 10:34:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:24.507 00:13:24.507 real 0m46.563s 00:13:24.507 user 3m35.472s 00:13:24.507 sys 0m15.464s 00:13:24.507 10:34:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:24.507 10:34:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.507 ************************************ 00:13:24.507 END TEST nvmf_ns_hotplug_stress 00:13:24.507 ************************************ 00:13:24.507 10:34:12 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:24.507 10:34:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:24.507 10:34:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:24.507 10:34:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:24.507 ************************************ 00:13:24.507 START TEST nvmf_connect_stress 00:13:24.507 ************************************ 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:24.507 * Looking for test storage... 
00:13:24.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.507 10:34:12 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.507 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.508 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.508 10:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.508 10:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.508 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.508 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.508 10:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.508 10:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:26.414 10:34:14 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.414 
10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:26.414 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:26.414 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.414 
10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.414 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:26.414 Found net devices under 0000:08:00.0: cvl_0_0 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.415 10:34:14 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:26.415 Found net devices under 0000:08:00.1: cvl_0_1 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:26.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:26.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:13:26.415 00:13:26.415 --- 10.0.0.2 ping statistics --- 00:13:26.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.415 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:13:26.415 00:13:26.415 --- 10.0.0.1 ping statistics --- 00:13:26.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.415 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:26.415 10:34:14 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3783346 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3783346 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3783346 ']' 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:26.415 10:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.415 [2024-07-23 10:34:14.760583] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:13:26.415 [2024-07-23 10:34:14.760665] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.415 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.415 [2024-07-23 10:34:14.828190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:26.415 [2024-07-23 10:34:14.915299] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:26.415 [2024-07-23 10:34:14.915356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.415 [2024-07-23 10:34:14.915372] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.415 [2024-07-23 10:34:14.915385] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.415 [2024-07-23 10:34:14.915397] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.415 [2024-07-23 10:34:14.915491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.415 [2024-07-23 10:34:14.915562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.415 [2024-07-23 10:34:14.915593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.674 [2024-07-23 10:34:15.043170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.674 10:34:15 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.674 [2024-07-23 10:34:15.079653] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.674 NULL1 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3783457 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:26.674 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.675 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.675 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.240 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.240 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:27.240 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.240 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.240 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.498 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.498 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:27.498 10:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.498 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.498 10:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.757 10:34:16 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.757 10:34:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:27.757 10:34:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.757 10:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.757 10:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.015 10:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.015 10:34:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:28.015 10:34:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.015 10:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.015 10:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.273 10:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.273 10:34:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:28.273 10:34:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.273 10:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.273 10:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.839 10:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.839 10:34:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:28.839 10:34:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.839 10:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.839 10:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.097 10:34:17 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.097 10:34:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:29.097 10:34:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.097 10:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.097 10:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.361 10:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.361 10:34:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:29.361 10:34:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.361 10:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.361 10:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.622 10:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.622 10:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:29.622 10:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.622 10:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.622 10:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.880 10:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.880 10:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:29.880 10:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.880 10:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.880 10:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.446 10:34:18 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.446 10:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:30.446 10:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.446 10:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.446 10:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.703 10:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.703 10:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:30.703 10:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.703 10:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.703 10:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.961 10:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.961 10:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:30.961 10:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.961 10:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.961 10:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.219 10:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.219 10:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:31.219 10:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.219 10:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.219 10:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.477 10:34:19 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.477 10:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:31.477 10:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.477 10:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.477 10:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.043 10:34:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.043 10:34:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:32.043 10:34:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.043 10:34:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.043 10:34:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.301 10:34:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.301 10:34:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:32.301 10:34:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.301 10:34:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.301 10:34:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.559 10:34:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.559 10:34:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:32.559 10:34:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.559 10:34:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.559 10:34:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.817 10:34:21 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.817 10:34:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:32.817 10:34:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.817 10:34:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.817 10:34:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.075 10:34:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.075 10:34:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:33.075 10:34:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.075 10:34:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.075 10:34:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.641 10:34:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.641 10:34:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:33.641 10:34:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.641 10:34:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.641 10:34:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.898 10:34:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.898 10:34:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:33.898 10:34:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.898 10:34:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.898 10:34:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.156 10:34:22 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.156 10:34:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:34.156 10:34:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.156 10:34:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.156 10:34:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.414 10:34:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.414 10:34:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:34.414 10:34:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.414 10:34:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.414 10:34:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.672 10:34:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.672 10:34:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:34.672 10:34:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.672 10:34:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.672 10:34:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.238 10:34:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.238 10:34:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:35.238 10:34:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.238 10:34:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.238 10:34:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.496 10:34:23 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.496 10:34:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:35.496 10:34:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.496 10:34:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.496 10:34:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.754 10:34:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.754 10:34:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:35.754 10:34:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.754 10:34:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.754 10:34:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.012 10:34:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.012 10:34:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:36.012 10:34:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.012 10:34:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.012 10:34:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.578 10:34:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.578 10:34:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:36.578 10:34:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.578 10:34:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.578 10:34:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.841 10:34:25 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.841 10:34:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:36.841 10:34:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.841 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.841 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.841 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:37.100 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3783457 00:13:37.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3783457) - No such process 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3783457 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.101 rmmod nvme_tcp 00:13:37.101 rmmod nvme_fabrics 00:13:37.101 rmmod 
nvme_keyring 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3783346 ']' 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3783346 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3783346 ']' 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3783346 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3783346 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3783346' 00:13:37.101 killing process with pid 3783346 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3783346 00:13:37.101 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3783346 00:13:37.361 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:37.361 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:37.361 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:37.361 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.361 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.361 10:34:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.361 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.361 10:34:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.271 10:34:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:39.271 00:13:39.271 real 0m14.903s 00:13:39.271 user 0m38.335s 00:13:39.271 sys 0m5.450s 00:13:39.271 10:34:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:39.271 10:34:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.271 ************************************ 00:13:39.271 END TEST nvmf_connect_stress 00:13:39.271 ************************************ 00:13:39.271 10:34:27 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:39.271 10:34:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:39.271 10:34:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:39.271 10:34:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:39.271 ************************************ 00:13:39.271 START TEST nvmf_fused_ordering 00:13:39.271 ************************************ 00:13:39.271 10:34:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:39.532 * Looking for test storage... 
00:13:39.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.532 10:34:27 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:39.532 10:34:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:41.441 10:34:29 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.441 
10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:41.441 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:41.442 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:41.442 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.442 
10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:41.442 Found net devices under 0000:08:00.0: cvl_0_0 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.442 10:34:29 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:41.442 Found net devices under 0000:08:00.1: cvl_0_1 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:41.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:41.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:13:41.442 00:13:41.442 --- 10.0.0.2 ping statistics --- 00:13:41.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.442 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:13:41.442 00:13:41.442 --- 10.0.0.1 ping statistics --- 00:13:41.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.442 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:41.442 10:34:29 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3785873 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3785873 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3785873 ']' 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:41.442 10:34:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.442 [2024-07-23 10:34:29.718681] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:13:41.442 [2024-07-23 10:34:29.718774] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.442 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.442 [2024-07-23 10:34:29.785539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.442 [2024-07-23 10:34:29.876012] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:41.442 [2024-07-23 10:34:29.876082] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.442 [2024-07-23 10:34:29.876099] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.442 [2024-07-23 10:34:29.876113] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.442 [2024-07-23 10:34:29.876125] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.442 [2024-07-23 10:34:29.876174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.701 10:34:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:41.701 10:34:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:13:41.701 10:34:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:41.701 10:34:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:41.701 10:34:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.701 [2024-07-23 10:34:30.009274] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.701 [2024-07-23 10:34:30.025546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.701 NULL1 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.701 10:34:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:41.701 [2024-07-23 10:34:30.071681] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:13:41.701 [2024-07-23 10:34:30.071734] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785899 ] 00:13:41.701 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.268 Attached to nqn.2016-06.io.spdk:cnode1 00:13:42.268 Namespace ID: 1 size: 1GB 00:13:42.268 fused_ordering(0) 00:13:42.268 fused_ordering(1) 00:13:42.268 fused_ordering(2) 00:13:42.268 fused_ordering(3) 00:13:42.268 fused_ordering(4) 00:13:42.268 fused_ordering(5) 00:13:42.268 fused_ordering(6) 00:13:42.268 fused_ordering(7) 00:13:42.268 fused_ordering(8) 00:13:42.268 fused_ordering(9) 00:13:42.268 fused_ordering(10) 00:13:42.268 fused_ordering(11) 00:13:42.268 fused_ordering(12) 00:13:42.268 fused_ordering(13) 00:13:42.268 fused_ordering(14) 00:13:42.268 fused_ordering(15) 00:13:42.268 fused_ordering(16) 00:13:42.268 fused_ordering(17) 00:13:42.268 fused_ordering(18) 00:13:42.268 fused_ordering(19) 00:13:42.268 fused_ordering(20) 00:13:42.268 fused_ordering(21) 00:13:42.268 fused_ordering(22) 00:13:42.268 fused_ordering(23) 00:13:42.268 fused_ordering(24) 00:13:42.268 fused_ordering(25) 00:13:42.268 fused_ordering(26) 00:13:42.268 
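Before the per-command output above begins, fused_ordering.sh@15-20 configures the target through a series of `rpc_cmd` calls: create the TCP transport, create the subsystem, add a TCP listener, create a 1000 MB null bdev with 512-byte blocks (the "size: 1GB" namespace reported above), wait for bdev examine, and attach the bdev as a namespace. A sketch of that sequence with `rpc_cmd` stubbed to echo, so the order is visible without a live nvmf_tgt — in the real run, `rpc_cmd` forwards each call to SPDK's rpc.py over `/var/tmp/spdk.sock`:

```shell
# Target-side RPC setup traced above, in order. rpc_cmd is stubbed
# here; against a running nvmf_tgt it would dispatch via rpc.py.
rpc_cmd() { echo "rpc_cmd $*"; }
NQN=nqn.2016-06.io.spdk:cnode1

rpc_cmd nvmf_create_transport -t tcp -o -u 8192                        # fused_ordering.sh@15
rpc_cmd nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10    # @16
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420  # @17
rpc_cmd bdev_null_create NULL1 1000 512                                # @18
rpc_cmd bdev_wait_for_examine                                          # @19
rpc_cmd nvmf_subsystem_add_ns "$NQN" NULL1                             # @20
```

The fused_ordering test binary then connects to that listener with `-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'` and prints one `fused_ordering(N)` line per fused command pair it submits.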
fused_ordering(27) 00:13:42.268 fused_ordering(28) 00:13:42.268 fused_ordering(29) 00:13:42.268 fused_ordering(30) 00:13:42.268 fused_ordering(31) 00:13:42.268 fused_ordering(32) 00:13:42.268 fused_ordering(33) 00:13:42.268 fused_ordering(34) 00:13:42.268 fused_ordering(35) 00:13:42.268 fused_ordering(36) 00:13:42.268 fused_ordering(37) 00:13:42.268 fused_ordering(38) 00:13:42.268 fused_ordering(39) 00:13:42.268 fused_ordering(40) 00:13:42.268 fused_ordering(41) 00:13:42.268 fused_ordering(42) 00:13:42.268 fused_ordering(43) 00:13:42.268 fused_ordering(44) 00:13:42.268 fused_ordering(45) 00:13:42.268 fused_ordering(46) 00:13:42.268 fused_ordering(47) 00:13:42.268 fused_ordering(48) 00:13:42.268 fused_ordering(49) 00:13:42.268 fused_ordering(50) 00:13:42.268 fused_ordering(51) 00:13:42.268 fused_ordering(52) 00:13:42.268 fused_ordering(53) 00:13:42.268 fused_ordering(54) 00:13:42.268 fused_ordering(55) 00:13:42.268 fused_ordering(56) 00:13:42.268 fused_ordering(57) 00:13:42.268 fused_ordering(58) 00:13:42.268 fused_ordering(59) 00:13:42.268 fused_ordering(60) 00:13:42.268 fused_ordering(61) 00:13:42.268 fused_ordering(62) 00:13:42.268 fused_ordering(63) 00:13:42.268 fused_ordering(64) 00:13:42.268 fused_ordering(65) 00:13:42.268 fused_ordering(66) 00:13:42.268 fused_ordering(67) 00:13:42.268 fused_ordering(68) 00:13:42.268 fused_ordering(69) 00:13:42.268 fused_ordering(70) 00:13:42.268 fused_ordering(71) 00:13:42.268 fused_ordering(72) 00:13:42.268 fused_ordering(73) 00:13:42.268 fused_ordering(74) 00:13:42.268 fused_ordering(75) 00:13:42.268 fused_ordering(76) 00:13:42.268 fused_ordering(77) 00:13:42.268 fused_ordering(78) 00:13:42.268 fused_ordering(79) 00:13:42.268 fused_ordering(80) 00:13:42.268 fused_ordering(81) 00:13:42.268 fused_ordering(82) 00:13:42.268 fused_ordering(83) 00:13:42.268 fused_ordering(84) 00:13:42.268 fused_ordering(85) 00:13:42.268 fused_ordering(86) 00:13:42.268 fused_ordering(87) 00:13:42.268 fused_ordering(88) 00:13:42.268 
fused_ordering(89) 00:13:42.268 fused_ordering(90) 00:13:42.268 fused_ordering(91) 00:13:42.268 fused_ordering(92) 00:13:42.268 fused_ordering(93) 00:13:42.268 fused_ordering(94) 00:13:42.268 fused_ordering(95) 00:13:42.268 fused_ordering(96) 00:13:42.268 fused_ordering(97) 00:13:42.268 fused_ordering(98) 00:13:42.268 fused_ordering(99) 00:13:42.268 fused_ordering(100) 00:13:42.268 fused_ordering(101) 00:13:42.268 fused_ordering(102) 00:13:42.268 fused_ordering(103) 00:13:42.268 fused_ordering(104) 00:13:42.268 fused_ordering(105) 00:13:42.268 fused_ordering(106) 00:13:42.268 fused_ordering(107) 00:13:42.268 fused_ordering(108) 00:13:42.268 fused_ordering(109) 00:13:42.268 fused_ordering(110) 00:13:42.268 fused_ordering(111) 00:13:42.268 fused_ordering(112) 00:13:42.268 fused_ordering(113) 00:13:42.268 fused_ordering(114) 00:13:42.268 fused_ordering(115) 00:13:42.268 fused_ordering(116) 00:13:42.268 fused_ordering(117) 00:13:42.268 fused_ordering(118) 00:13:42.268 fused_ordering(119) 00:13:42.268 fused_ordering(120) 00:13:42.268 fused_ordering(121) 00:13:42.268 fused_ordering(122) 00:13:42.268 fused_ordering(123) 00:13:42.268 fused_ordering(124) 00:13:42.268 fused_ordering(125) 00:13:42.268 fused_ordering(126) 00:13:42.268 fused_ordering(127) 00:13:42.268 fused_ordering(128) 00:13:42.268 fused_ordering(129) 00:13:42.268 fused_ordering(130) 00:13:42.268 fused_ordering(131) 00:13:42.268 fused_ordering(132) 00:13:42.268 fused_ordering(133) 00:13:42.268 fused_ordering(134) 00:13:42.268 fused_ordering(135) 00:13:42.268 fused_ordering(136) 00:13:42.268 fused_ordering(137) 00:13:42.268 fused_ordering(138) 00:13:42.268 fused_ordering(139) 00:13:42.268 fused_ordering(140) 00:13:42.268 fused_ordering(141) 00:13:42.268 fused_ordering(142) 00:13:42.268 fused_ordering(143) 00:13:42.268 fused_ordering(144) 00:13:42.268 fused_ordering(145) 00:13:42.268 fused_ordering(146) 00:13:42.268 fused_ordering(147) 00:13:42.268 fused_ordering(148) 00:13:42.268 fused_ordering(149) 
00:13:42.268 fused_ordering(150) 00:13:42.268 fused_ordering(151) 00:13:42.268 fused_ordering(152) 00:13:42.268 fused_ordering(153) 00:13:42.268 fused_ordering(154) 00:13:42.268 fused_ordering(155) 00:13:42.268 fused_ordering(156) 00:13:42.268 fused_ordering(157) 00:13:42.268 fused_ordering(158) 00:13:42.268 fused_ordering(159) 00:13:42.268 fused_ordering(160) 00:13:42.268 fused_ordering(161) 00:13:42.268 fused_ordering(162) 00:13:42.268 fused_ordering(163) 00:13:42.268 fused_ordering(164) 00:13:42.268 fused_ordering(165) 00:13:42.268 fused_ordering(166) 00:13:42.268 fused_ordering(167) 00:13:42.268 fused_ordering(168) 00:13:42.268 fused_ordering(169) 00:13:42.268 fused_ordering(170) 00:13:42.268 fused_ordering(171) 00:13:42.268 fused_ordering(172) 00:13:42.268 fused_ordering(173) 00:13:42.268 fused_ordering(174) 00:13:42.268 fused_ordering(175) 00:13:42.268 fused_ordering(176) 00:13:42.268 fused_ordering(177) 00:13:42.268 fused_ordering(178) 00:13:42.268 fused_ordering(179) 00:13:42.268 fused_ordering(180) 00:13:42.268 fused_ordering(181) 00:13:42.268 fused_ordering(182) 00:13:42.268 fused_ordering(183) 00:13:42.268 fused_ordering(184) 00:13:42.268 fused_ordering(185) 00:13:42.268 fused_ordering(186) 00:13:42.268 fused_ordering(187) 00:13:42.268 fused_ordering(188) 00:13:42.268 fused_ordering(189) 00:13:42.268 fused_ordering(190) 00:13:42.268 fused_ordering(191) 00:13:42.268 fused_ordering(192) 00:13:42.268 fused_ordering(193) 00:13:42.268 fused_ordering(194) 00:13:42.268 fused_ordering(195) 00:13:42.268 fused_ordering(196) 00:13:42.268 fused_ordering(197) 00:13:42.268 fused_ordering(198) 00:13:42.268 fused_ordering(199) 00:13:42.268 fused_ordering(200) 00:13:42.268 fused_ordering(201) 00:13:42.268 fused_ordering(202) 00:13:42.268 fused_ordering(203) 00:13:42.268 fused_ordering(204) 00:13:42.268 fused_ordering(205) 00:13:42.527 fused_ordering(206) 00:13:42.527 fused_ordering(207) 00:13:42.527 fused_ordering(208) 00:13:42.527 fused_ordering(209) 00:13:42.527 
fused_ordering(210) 00:13:42.527 fused_ordering(211) 00:13:42.527 fused_ordering(212) 00:13:42.527 fused_ordering(213) 00:13:42.527 fused_ordering(214) 00:13:42.527 fused_ordering(215) 00:13:42.527 fused_ordering(216) 00:13:42.527 fused_ordering(217) 00:13:42.527 fused_ordering(218) 00:13:42.527 fused_ordering(219) 00:13:42.527 fused_ordering(220) 00:13:42.527 fused_ordering(221) 00:13:42.527 fused_ordering(222) 00:13:42.527 fused_ordering(223) 00:13:42.527 fused_ordering(224) 00:13:42.527 fused_ordering(225) 00:13:42.527 fused_ordering(226) 00:13:42.527 fused_ordering(227) 00:13:42.527 fused_ordering(228) 00:13:42.527 fused_ordering(229) 00:13:42.527 fused_ordering(230) 00:13:42.527 fused_ordering(231) 00:13:42.527 fused_ordering(232) 00:13:42.527 fused_ordering(233) 00:13:42.527 fused_ordering(234) 00:13:42.527 fused_ordering(235) 00:13:42.527 fused_ordering(236) 00:13:42.527 fused_ordering(237) 00:13:42.527 fused_ordering(238) 00:13:42.527 fused_ordering(239) 00:13:42.527 fused_ordering(240) 00:13:42.527 fused_ordering(241) 00:13:42.527 fused_ordering(242) 00:13:42.527 fused_ordering(243) 00:13:42.527 fused_ordering(244) 00:13:42.527 fused_ordering(245) 00:13:42.527 fused_ordering(246) 00:13:42.527 fused_ordering(247) 00:13:42.527 fused_ordering(248) 00:13:42.527 fused_ordering(249) 00:13:42.527 fused_ordering(250) 00:13:42.527 fused_ordering(251) 00:13:42.527 fused_ordering(252) 00:13:42.527 fused_ordering(253) 00:13:42.527 fused_ordering(254) 00:13:42.527 fused_ordering(255) 00:13:42.527 fused_ordering(256) 00:13:42.527 fused_ordering(257) 00:13:42.527 fused_ordering(258) 00:13:42.527 fused_ordering(259) 00:13:42.527 fused_ordering(260) 00:13:42.527 fused_ordering(261) 00:13:42.527 fused_ordering(262) 00:13:42.527 fused_ordering(263) 00:13:42.527 fused_ordering(264) 00:13:42.527 fused_ordering(265) 00:13:42.527 fused_ordering(266) 00:13:42.527 fused_ordering(267) 00:13:42.527 fused_ordering(268) 00:13:42.527 fused_ordering(269) 00:13:42.527 fused_ordering(270) 
00:13:42.527 fused_ordering(271) 00:13:42.527 fused_ordering(272) 00:13:42.527 fused_ordering(273) 00:13:42.527 fused_ordering(274) 00:13:42.527 fused_ordering(275) 00:13:42.527 fused_ordering(276) 00:13:42.527 fused_ordering(277) 00:13:42.527 fused_ordering(278) 00:13:42.527 fused_ordering(279) 00:13:42.527 fused_ordering(280) 00:13:42.527 fused_ordering(281) 00:13:42.527 fused_ordering(282) 00:13:42.527 fused_ordering(283) 00:13:42.527 fused_ordering(284) 00:13:42.527 fused_ordering(285) 00:13:42.527 fused_ordering(286) 00:13:42.527 fused_ordering(287) 00:13:42.527 fused_ordering(288) 00:13:42.527 fused_ordering(289) 00:13:42.527 fused_ordering(290) 00:13:42.527 fused_ordering(291) 00:13:42.527 fused_ordering(292) 00:13:42.527 fused_ordering(293) 00:13:42.527 fused_ordering(294) 00:13:42.527 fused_ordering(295) 00:13:42.527 fused_ordering(296) 00:13:42.527 fused_ordering(297) 00:13:42.527 fused_ordering(298) 00:13:42.527 fused_ordering(299) 00:13:42.527 fused_ordering(300) 00:13:42.527 fused_ordering(301) 00:13:42.527 fused_ordering(302) 00:13:42.527 fused_ordering(303) 00:13:42.527 fused_ordering(304) 00:13:42.527 fused_ordering(305) 00:13:42.527 fused_ordering(306) 00:13:42.527 fused_ordering(307) 00:13:42.527 fused_ordering(308) 00:13:42.527 fused_ordering(309) 00:13:42.527 fused_ordering(310) 00:13:42.527 fused_ordering(311) 00:13:42.527 fused_ordering(312) 00:13:42.527 fused_ordering(313) 00:13:42.527 fused_ordering(314) 00:13:42.527 fused_ordering(315) 00:13:42.527 fused_ordering(316) 00:13:42.527 fused_ordering(317) 00:13:42.527 fused_ordering(318) 00:13:42.527 fused_ordering(319) 00:13:42.527 fused_ordering(320) 00:13:42.527 fused_ordering(321) 00:13:42.527 fused_ordering(322) 00:13:42.527 fused_ordering(323) 00:13:42.527 fused_ordering(324) 00:13:42.527 fused_ordering(325) 00:13:42.527 fused_ordering(326) 00:13:42.527 fused_ordering(327) 00:13:42.527 fused_ordering(328) 00:13:42.527 fused_ordering(329) 00:13:42.527 fused_ordering(330) 00:13:42.527 
fused_ordering(331) 00:13:42.527 fused_ordering(332) 00:13:42.527 fused_ordering(333) 00:13:42.527 fused_ordering(334) 00:13:42.527 fused_ordering(335) 00:13:42.527 fused_ordering(336) 00:13:42.527 fused_ordering(337) 00:13:42.527 fused_ordering(338) 00:13:42.527 fused_ordering(339) 00:13:42.527 fused_ordering(340) 00:13:42.527 fused_ordering(341) 00:13:42.527 fused_ordering(342) 00:13:42.527 fused_ordering(343) 00:13:42.527 fused_ordering(344) 00:13:42.527 fused_ordering(345) 00:13:42.527 fused_ordering(346) 00:13:42.527 fused_ordering(347) 00:13:42.527 fused_ordering(348) 00:13:42.528 fused_ordering(349) 00:13:42.528 fused_ordering(350) 00:13:42.528 fused_ordering(351) 00:13:42.528 fused_ordering(352) 00:13:42.528 fused_ordering(353) 00:13:42.528 fused_ordering(354) 00:13:42.528 fused_ordering(355) 00:13:42.528 fused_ordering(356) 00:13:42.528 fused_ordering(357) 00:13:42.528 fused_ordering(358) 00:13:42.528 fused_ordering(359) 00:13:42.528 fused_ordering(360) 00:13:42.528 fused_ordering(361) 00:13:42.528 fused_ordering(362) 00:13:42.528 fused_ordering(363) 00:13:42.528 fused_ordering(364) 00:13:42.528 fused_ordering(365) 00:13:42.528 fused_ordering(366) 00:13:42.528 fused_ordering(367) 00:13:42.528 fused_ordering(368) 00:13:42.528 fused_ordering(369) 00:13:42.528 fused_ordering(370) 00:13:42.528 fused_ordering(371) 00:13:42.528 fused_ordering(372) 00:13:42.528 fused_ordering(373) 00:13:42.528 fused_ordering(374) 00:13:42.528 fused_ordering(375) 00:13:42.528 fused_ordering(376) 00:13:42.528 fused_ordering(377) 00:13:42.528 fused_ordering(378) 00:13:42.528 fused_ordering(379) 00:13:42.528 fused_ordering(380) 00:13:42.528 fused_ordering(381) 00:13:42.528 fused_ordering(382) 00:13:42.528 fused_ordering(383) 00:13:42.528 fused_ordering(384) 00:13:42.528 fused_ordering(385) 00:13:42.528 fused_ordering(386) 00:13:42.528 fused_ordering(387) 00:13:42.528 fused_ordering(388) 00:13:42.528 fused_ordering(389) 00:13:42.528 fused_ordering(390) 00:13:42.528 fused_ordering(391) 
00:13:42.528 fused_ordering(392) 00:13:42.528 fused_ordering(393) 00:13:42.528 fused_ordering(394) 00:13:42.528 fused_ordering(395) 00:13:42.528 fused_ordering(396) 00:13:42.528 fused_ordering(397) 00:13:42.528 fused_ordering(398) 00:13:42.528 fused_ordering(399) 00:13:42.528 fused_ordering(400) 00:13:42.528 fused_ordering(401) 00:13:42.528 fused_ordering(402) 00:13:42.528 fused_ordering(403) 00:13:42.528 fused_ordering(404) 00:13:42.528 fused_ordering(405) 00:13:42.528 fused_ordering(406) 00:13:42.528 fused_ordering(407) 00:13:42.528 fused_ordering(408) 00:13:42.528 fused_ordering(409) 00:13:42.528 fused_ordering(410) 00:13:43.094 fused_ordering(411) 00:13:43.094 fused_ordering(412) 00:13:43.094 fused_ordering(413) 00:13:43.094 fused_ordering(414) 00:13:43.094 fused_ordering(415) 00:13:43.094 fused_ordering(416) 00:13:43.094 fused_ordering(417) 00:13:43.094 fused_ordering(418) 00:13:43.094 fused_ordering(419) 00:13:43.094 fused_ordering(420) 00:13:43.094 fused_ordering(421) 00:13:43.094 fused_ordering(422) 00:13:43.094 fused_ordering(423) 00:13:43.094 fused_ordering(424) 00:13:43.094 fused_ordering(425) 00:13:43.094 fused_ordering(426) 00:13:43.094 fused_ordering(427) 00:13:43.094 fused_ordering(428) 00:13:43.094 fused_ordering(429) 00:13:43.094 fused_ordering(430) 00:13:43.094 fused_ordering(431) 00:13:43.094 fused_ordering(432) 00:13:43.094 fused_ordering(433) 00:13:43.094 fused_ordering(434) 00:13:43.094 fused_ordering(435) 00:13:43.094 fused_ordering(436) 00:13:43.094 fused_ordering(437) 00:13:43.094 fused_ordering(438) 00:13:43.094 fused_ordering(439) 00:13:43.094 fused_ordering(440) 00:13:43.094 fused_ordering(441) 00:13:43.094 fused_ordering(442) 00:13:43.094 fused_ordering(443) 00:13:43.094 fused_ordering(444) 00:13:43.094 fused_ordering(445) 00:13:43.094 fused_ordering(446) 00:13:43.094 fused_ordering(447) 00:13:43.094 fused_ordering(448) 00:13:43.094 fused_ordering(449) 00:13:43.094 fused_ordering(450) 00:13:43.094 fused_ordering(451) 00:13:43.094 
fused_ordering(452) 00:13:43.094 fused_ordering(453) 00:13:43.094 fused_ordering(454) 00:13:43.094 fused_ordering(455) 00:13:43.094 fused_ordering(456) 00:13:43.094 fused_ordering(457) 00:13:43.094 fused_ordering(458) 00:13:43.094 fused_ordering(459) 00:13:43.094 fused_ordering(460) 00:13:43.094 fused_ordering(461) 00:13:43.094 fused_ordering(462) 00:13:43.094 fused_ordering(463) 00:13:43.094 fused_ordering(464) 00:13:43.094 fused_ordering(465) 00:13:43.094 fused_ordering(466) 00:13:43.094 fused_ordering(467) 00:13:43.094 fused_ordering(468) 00:13:43.094 fused_ordering(469) 00:13:43.094 fused_ordering(470) 00:13:43.094 fused_ordering(471) 00:13:43.094 fused_ordering(472) 00:13:43.094 fused_ordering(473) 00:13:43.094 fused_ordering(474) 00:13:43.094 fused_ordering(475) 00:13:43.094 fused_ordering(476) 00:13:43.094 fused_ordering(477) 00:13:43.094 fused_ordering(478) 00:13:43.094 fused_ordering(479) 00:13:43.094 fused_ordering(480) 00:13:43.094 fused_ordering(481) 00:13:43.094 fused_ordering(482) 00:13:43.094 fused_ordering(483) 00:13:43.094 fused_ordering(484) 00:13:43.094 fused_ordering(485) 00:13:43.094 fused_ordering(486) 00:13:43.094 fused_ordering(487) 00:13:43.094 fused_ordering(488) 00:13:43.094 fused_ordering(489) 00:13:43.094 fused_ordering(490) 00:13:43.094 fused_ordering(491) 00:13:43.094 fused_ordering(492) 00:13:43.094 fused_ordering(493) 00:13:43.094 fused_ordering(494) 00:13:43.094 fused_ordering(495) 00:13:43.094 fused_ordering(496) 00:13:43.094 fused_ordering(497) 00:13:43.094 fused_ordering(498) 00:13:43.094 fused_ordering(499) 00:13:43.094 fused_ordering(500) 00:13:43.094 fused_ordering(501) 00:13:43.094 fused_ordering(502) 00:13:43.094 fused_ordering(503) 00:13:43.094 fused_ordering(504) 00:13:43.094 fused_ordering(505) 00:13:43.094 fused_ordering(506) 00:13:43.094 fused_ordering(507) 00:13:43.094 fused_ordering(508) 00:13:43.094 fused_ordering(509) 00:13:43.094 fused_ordering(510) 00:13:43.094 fused_ordering(511) 00:13:43.094 fused_ordering(512) 
00:13:43.094 fused_ordering(513) 00:13:43.094 fused_ordering(514) 00:13:43.094 fused_ordering(515) 00:13:43.094 fused_ordering(516) 00:13:43.094 fused_ordering(517) 00:13:43.094 fused_ordering(518) 00:13:43.094 fused_ordering(519) 00:13:43.094 fused_ordering(520) 00:13:43.094 fused_ordering(521) 00:13:43.094 fused_ordering(522) 00:13:43.094 fused_ordering(523) 00:13:43.094 fused_ordering(524) 00:13:43.094 fused_ordering(525) 00:13:43.094 fused_ordering(526) 00:13:43.094 fused_ordering(527) 00:13:43.094 fused_ordering(528) 00:13:43.094 fused_ordering(529) 00:13:43.094 fused_ordering(530) 00:13:43.094 fused_ordering(531) 00:13:43.094 fused_ordering(532) 00:13:43.094 fused_ordering(533) 00:13:43.094 fused_ordering(534) 00:13:43.094 fused_ordering(535) 00:13:43.094 fused_ordering(536) 00:13:43.094 fused_ordering(537) 00:13:43.094 fused_ordering(538) 00:13:43.094 fused_ordering(539) 00:13:43.094 fused_ordering(540) 00:13:43.094 fused_ordering(541) 00:13:43.094 fused_ordering(542) 00:13:43.094 fused_ordering(543) 00:13:43.094 fused_ordering(544) 00:13:43.094 fused_ordering(545) 00:13:43.094 fused_ordering(546) 00:13:43.095 fused_ordering(547) 00:13:43.095 fused_ordering(548) 00:13:43.095 fused_ordering(549) 00:13:43.095 fused_ordering(550) 00:13:43.095 fused_ordering(551) 00:13:43.095 fused_ordering(552) 00:13:43.095 fused_ordering(553) 00:13:43.095 fused_ordering(554) 00:13:43.095 fused_ordering(555) 00:13:43.095 fused_ordering(556) 00:13:43.095 fused_ordering(557) 00:13:43.095 fused_ordering(558) 00:13:43.095 fused_ordering(559) 00:13:43.095 fused_ordering(560) 00:13:43.095 fused_ordering(561) 00:13:43.095 fused_ordering(562) 00:13:43.095 fused_ordering(563) 00:13:43.095 fused_ordering(564) 00:13:43.095 fused_ordering(565) 00:13:43.095 fused_ordering(566) 00:13:43.095 fused_ordering(567) 00:13:43.095 fused_ordering(568) 00:13:43.095 fused_ordering(569) 00:13:43.095 fused_ordering(570) 00:13:43.095 fused_ordering(571) 00:13:43.095 fused_ordering(572) 00:13:43.095 
fused_ordering(573) 00:13:43.095 [... fused_ordering(574) through fused_ordering(996) elided: repetitive per-iteration counter output, timestamps advancing 00:13:43.095 -> 00:13:43.661 -> 00:13:43.662 -> 00:13:44.249 ...]
00:13:44.249 fused_ordering(997) 00:13:44.249 fused_ordering(998) 00:13:44.249 fused_ordering(999) 00:13:44.249 fused_ordering(1000) 00:13:44.249 fused_ordering(1001) 00:13:44.249 fused_ordering(1002) 00:13:44.249 fused_ordering(1003) 00:13:44.249 fused_ordering(1004) 00:13:44.249 fused_ordering(1005) 00:13:44.249 fused_ordering(1006) 00:13:44.249 fused_ordering(1007) 00:13:44.249 fused_ordering(1008) 00:13:44.249 fused_ordering(1009) 00:13:44.249 fused_ordering(1010) 00:13:44.249 fused_ordering(1011) 00:13:44.249 fused_ordering(1012) 00:13:44.249 fused_ordering(1013) 00:13:44.249 fused_ordering(1014) 00:13:44.249 fused_ordering(1015) 00:13:44.249 fused_ordering(1016) 00:13:44.249 fused_ordering(1017) 00:13:44.249 fused_ordering(1018) 00:13:44.249 fused_ordering(1019) 00:13:44.249 fused_ordering(1020) 00:13:44.249 fused_ordering(1021) 00:13:44.249 fused_ordering(1022) 00:13:44.249 fused_ordering(1023) 00:13:44.249 10:34:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:44.249 10:34:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:44.249 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.249 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:44.249 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.249 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:44.250 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.250 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.250 rmmod nvme_tcp 00:13:44.515 rmmod nvme_fabrics 00:13:44.515 rmmod nvme_keyring 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:44.515 10:34:32 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3785873 ']' 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3785873 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3785873 ']' 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3785873 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3785873 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3785873' 00:13:44.515 killing process with pid 3785873 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3785873 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3785873 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.515 10:34:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.047 10:34:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:47.047 00:13:47.047 real 0m7.261s 00:13:47.047 user 0m5.335s 00:13:47.047 sys 0m2.900s 00:13:47.047 10:34:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:47.047 10:34:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.047 ************************************ 00:13:47.047 END TEST nvmf_fused_ordering 00:13:47.047 ************************************ 00:13:47.047 10:34:35 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:47.047 10:34:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:47.047 10:34:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:47.047 10:34:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:47.047 ************************************ 00:13:47.047 START TEST nvmf_delete_subsystem 00:13:47.047 ************************************ 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:47.047 * Looking for test storage... 
00:13:47.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:47.047 10:34:35 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.047 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:47.048 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:47.048 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:47.048 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.048 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.048 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.048 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:47.048 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:47.048 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:47.048 10:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.425 10:34:36 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:48.425 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:48.425 Found 
0000:08:00.1 (0x8086 - 0x159b) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.425 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:48.426 Found net devices under 0000:08:00.0: cvl_0_0 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:48.426 Found net devices under 0000:08:00.1: cvl_0_1 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.426 
10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:48.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:48.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:13:48.426 00:13:48.426 --- 10.0.0.2 ping statistics --- 00:13:48.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.426 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:13:48.426 00:13:48.426 --- 10.0.0.1 ping statistics --- 00:13:48.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.426 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:48.426 
10:34:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3787642 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3787642 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3787642 ']' 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:48.426 10:34:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.684 [2024-07-23 10:34:36.928715] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:13:48.684 [2024-07-23 10:34:36.928811] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.684 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.684 [2024-07-23 10:34:37.008563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:48.684 [2024-07-23 10:34:37.113387] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:48.684 [2024-07-23 10:34:37.113462] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.684 [2024-07-23 10:34:37.113502] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.684 [2024-07-23 10:34:37.113530] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.684 [2024-07-23 10:34:37.113553] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.684 [2024-07-23 10:34:37.113652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.684 [2024-07-23 10:34:37.113666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.942 [2024-07-23 10:34:37.319234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.942 [2024-07-23 10:34:37.335433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.942 NULL1 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.942 Delay0 00:13:48.942 10:34:37 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.942 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.943 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.943 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.943 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.943 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3787715 00:13:48.943 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:48.943 10:34:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:48.943 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.943 [2024-07-23 10:34:37.410114] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
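The device-discovery loop traced earlier in this run (the `nvmf/common.sh@382`–`401` lines) globs each PCI address's `net/` directory under sysfs and keeps only the interface basenames — that is how `cvl_0_0` and `cvl_0_1` are matched to `0000:08:00.0` and `0000:08:00.1`. A minimal Python sketch of the same idea; the throwaway fake sysfs tree is purely illustrative so the sketch runs anywhere, and is not part of the actual test scripts:

```python
import glob
import os
import tempfile

def find_net_devs(sysfs_root, pci_addrs):
    """Mimic the nvmf/common.sh loop: for each PCI address, glob its
    net/ subdirectory and collect the interface names (basenames)."""
    net_devs = []
    for pci in pci_addrs:
        pattern = os.path.join(sysfs_root, "bus/pci/devices", pci, "net", "*")
        # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the shell version
        pci_net_devs = [os.path.basename(p) for p in sorted(glob.glob(pattern))]
        net_devs.extend(pci_net_devs)
    return net_devs

# Build a fake sysfs layout matching what the log reports for the E810 ports.
root = tempfile.mkdtemp()
for pci, iface in [("0000:08:00.0", "cvl_0_0"), ("0000:08:00.1", "cvl_0_1")]:
    os.makedirs(os.path.join(root, "bus/pci/devices", pci, "net", iface))

print(find_net_devs(root, ["0000:08:00.0", "0000:08:00.1"]))
```

With two matching interfaces found, the script then takes the `(( 2 > 1 ))` branch seen above and splits them into target and initiator roles.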
00:13:51.469 10:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.469 10:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.469 10:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with 
error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 [2024-07-23 10:34:39.532308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5a090 is same with the state(5) to be set 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 
00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read 
completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read 
completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 starting I/O failed: -6 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 [2024-07-23 10:34:39.533238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5234000c00 is same with the state(5) to be set 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.469 
Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Write completed with error (sct=0, sc=8) 00:13:51.469 Read completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Write completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Write completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Write completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Write completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Write completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Write completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Write completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Write completed with error (sct=0, sc=8) 00:13:51.470 Write completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Read completed with error (sct=0, sc=8) 00:13:51.470 Write completed with error (sct=0, sc=8) 00:13:52.036 [2024-07-23 10:34:40.507040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5d7c0 is same with the state(5) to be set 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed 
with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 [2024-07-23 10:34:40.536872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f523400bfe0 is same with the state(5) to be set 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, 
sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 [2024-07-23 10:34:40.537258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5a270 is same with the state(5) to be set 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 [2024-07-23 10:34:40.537445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f523400c600 is same with the state(5) to be set 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, 
sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 Read completed with error (sct=0, sc=8) 00:13:52.036 Write completed with error (sct=0, sc=8) 00:13:52.036 [2024-07-23 10:34:40.537651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5a9b0 is same with the state(5) to be set 00:13:52.036 Initializing NVMe Controllers 00:13:52.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:52.036 Controller IO queue size 128, less than required. 00:13:52.036 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:52.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:52.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:52.036 Initialization complete. Launching workers. 
00:13:52.036 ======================================================== 00:13:52.036 Latency(us) 00:13:52.036 Device Information : IOPS MiB/s Average min max 00:13:52.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.75 0.08 913662.48 641.06 2002773.80 00:13:52.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.28 0.08 935045.56 410.67 2004828.63 00:13:52.036 ======================================================== 00:13:52.036 Total : 323.03 0.16 924206.21 410.67 2004828.63 00:13:52.036 00:13:52.036 [2024-07-23 10:34:40.538680] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5d7c0 (9): Bad file descriptor 00:13:52.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:52.294 10:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.294 10:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:52.294 10:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3787715 00:13:52.294 10:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3787715 00:13:52.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3787715) - No such process 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3787715 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3787715 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@636 -- # local arg=wait 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3787715 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.552 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.810 [2024-07-23 10:34:41.061054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3788023 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3788023 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:52.810 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:52.810 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.811 [2024-07-23 10:34:41.121992] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:13:53.374 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:53.374 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3788023 00:13:53.374 10:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:53.632 10:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:53.632 10:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3788023 00:13:53.632 10:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:54.196 10:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:54.196 10:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3788023 00:13:54.196 10:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:54.760 10:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:54.760 10:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3788023 00:13:54.760 10:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:55.324 10:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:55.324 10:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3788023 00:13:55.324 10:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:55.890 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:55.890 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3788023 00:13:55.890 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:55.890 Initializing NVMe Controllers 00:13:55.890 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.890 Controller IO queue size 128, less than required. 00:13:55.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:55.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:55.890 Initialization complete. Launching workers. 00:13:55.890 ======================================================== 00:13:55.890 Latency(us) 00:13:55.890 Device Information : IOPS MiB/s Average min max 00:13:55.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003913.78 1000230.74 1043036.65 00:13:55.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004867.18 1000214.16 1040545.58 00:13:55.890 ======================================================== 00:13:55.890 Total : 256.00 0.12 1004390.48 1000214.16 1043036.65 00:13:55.890 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3788023 00:13:56.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3788023) - No such process 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3788023 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:56.148 rmmod nvme_tcp 00:13:56.148 rmmod nvme_fabrics 00:13:56.148 rmmod nvme_keyring 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3787642 ']' 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3787642 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3787642 ']' 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3787642 00:13:56.148 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:13:56.149 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:56.149 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3787642 00:13:56.407 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:56.407 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:56.407 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3787642' 00:13:56.407 killing process with pid 3787642 00:13:56.407 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3787642 00:13:56.407 10:34:44 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 3787642 00:13:56.407 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:56.407 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:56.407 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:56.407 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.407 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:56.407 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.407 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.407 10:34:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.945 10:34:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:58.945 00:13:58.945 real 0m11.806s 00:13:58.945 user 0m27.600s 00:13:58.945 sys 0m2.688s 00:13:58.945 10:34:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:58.945 10:34:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:58.945 ************************************ 00:13:58.945 END TEST nvmf_delete_subsystem 00:13:58.945 ************************************ 00:13:58.945 10:34:46 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:58.945 10:34:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:58.945 10:34:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:58.945 10:34:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:58.945 ************************************ 00:13:58.945 START TEST nvmf_ns_masking 00:13:58.945 
************************************ 00:13:58.945 10:34:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:58.945 * Looking for test storage... 00:13:58.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.945 10:34:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.945 10:34:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=d5247fe8-0461-41c2-8025-8b7f5bc2f553 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.945 10:34:47 nvmf_tcp.nvmf_ns_masking 
-- common/autotest_common.sh@10 -- # set +x 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.335 
10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:00.335 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:00.335 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:00.335 Found net devices under 0000:08:00.0: cvl_0_0 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:00.335 Found net devices under 0000:08:00.1: cvl_0_1 00:14:00.335 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:00.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:00.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:14:00.336 00:14:00.336 --- 10.0.0.2 ping statistics --- 00:14:00.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.336 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:14:00.336 00:14:00.336 --- 10.0.0.1 ping statistics --- 00:14:00.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.336 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 
00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3789826 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3789826 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3789826 ']' 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:00.336 10:34:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.336 [2024-07-23 10:34:48.785342] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:00.336 [2024-07-23 10:34:48.785438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.336 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.595 [2024-07-23 10:34:48.852115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.595 [2024-07-23 10:34:48.943239] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.595 [2024-07-23 10:34:48.943306] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:00.595 [2024-07-23 10:34:48.943322] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.595 [2024-07-23 10:34:48.943335] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.595 [2024-07-23 10:34:48.943347] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.595 [2024-07-23 10:34:48.943435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.595 [2024-07-23 10:34:48.943460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.595 [2024-07-23 10:34:48.943510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.595 [2024-07-23 10:34:48.943514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.595 10:34:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:00.595 10:34:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:00.595 10:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:00.595 10:34:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:00.595 10:34:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.595 10:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.595 10:34:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:00.853 [2024-07-23 10:34:49.355073] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.111 10:34:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:01.111 10:34:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:01.111 10:34:49 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:01.369 Malloc1 00:14:01.369 10:34:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:01.627 Malloc2 00:14:01.627 10:34:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:01.885 10:34:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:02.142 10:34:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.400 [2024-07-23 10:34:50.855685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.400 10:34:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:02.400 10:34:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d5247fe8-0461-41c2-8025-8b7f5bc2f553 -a 10.0.0.2 -s 4420 -i 4 00:14:02.658 10:34:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:02.658 10:34:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:02.658 10:34:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.658 10:34:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:02.658 10:34:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 
-- # sleep 2 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:05.187 [ 0]:0x1 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b6004748a90c4a428065fbbf3757b0f8 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b6004748a90c4a428065fbbf3757b0f8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:05.187 [ 0]:0x1 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b6004748a90c4a428065fbbf3757b0f8 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b6004748a90c4a428065fbbf3757b0f8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:05.187 [ 1]:0x2 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d8463f7fa7a74445ba5ed2a32241a8cd 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d8463f7fa7a74445ba5ed2a32241a8cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:05.187 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:14:05.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.188 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.445 10:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:06.011 10:34:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:06.011 10:34:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d5247fe8-0461-41c2-8025-8b7f5bc2f553 -a 10.0.0.2 -s 4420 -i 4 00:14:06.011 10:34:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:06.011 10:34:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:06.011 10:34:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.011 10:34:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:06.011 10:34:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:06.011 10:34:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( 
nvme_devices == nvme_device_counter )) 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:08.538 10:34:56 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:08.538 [ 0]:0x2 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d8463f7fa7a74445ba5ed2a32241a8cd 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d8463f7fa7a74445ba5ed2a32241a8cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:08.538 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:08.538 [ 0]:0x1 00:14:08.539 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 
-n 0x1 -o json 00:14:08.539 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:08.539 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b6004748a90c4a428065fbbf3757b0f8 00:14:08.539 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b6004748a90c4a428065fbbf3757b0f8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.539 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:08.539 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:08.539 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:08.539 [ 1]:0x2 00:14:08.539 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.539 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:08.539 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d8463f7fa7a74445ba5ed2a32241a8cd 00:14:08.539 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d8463f7fa7a74445ba5ed2a32241a8cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.539 10:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.798 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:08.798 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:08.798 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:08.798 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:08.798 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:14:08.798 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:08.798 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:08.798 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:08.798 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:08.798 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:08.798 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.798 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:09.057 [ 0]:0x2 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nguid=d8463f7fa7a74445ba5ed2a32241a8cd 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d8463f7fa7a74445ba5ed2a32241a8cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:09.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.057 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:09.314 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:09.314 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d5247fe8-0461-41c2-8025-8b7f5bc2f553 -a 10.0.0.2 -s 4420 -i 4 00:14:09.572 10:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:09.572 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:09.572 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.572 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:09.572 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:09.572 10:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:11.471 10:34:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:11.471 10:34:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:11.471 10:34:59 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.471 10:34:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:11.471 10:34:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.471 10:34:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:11.471 10:34:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:11.471 10:34:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:11.729 10:34:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:11.729 10:34:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:11.729 10:34:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:11.729 10:34:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:11.729 10:34:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:11.729 [ 0]:0x1 00:14:11.729 10:34:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.729 10:34:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:11.729 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b6004748a90c4a428065fbbf3757b0f8 00:14:11.729 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b6004748a90c4a428065fbbf3757b0f8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.729 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:11.729 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:11.729 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:11.729 [ 1]:0x2 00:14:11.729 10:35:00 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.729 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:11.729 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d8463f7fa7a74445ba5ed2a32241a8cd 00:14:11.729 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d8463f7fa7a74445ba5ed2a32241a8cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.729 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:11.988 10:35:00 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:11.988 [ 0]:0x2 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.988 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d8463f7fa7a74445ba5ed2a32241a8cd 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d8463f7fa7a74445ba5ed2a32241a8cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:12.246 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:12.505 [2024-07-23 10:35:00.782829] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:12.505 request: 00:14:12.505 { 00:14:12.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.505 "nsid": 2, 00:14:12.505 "host": "nqn.2016-06.io.spdk:host1", 00:14:12.505 "method": "nvmf_ns_remove_host", 00:14:12.505 "req_id": 1 00:14:12.505 } 00:14:12.505 Got JSON-RPC error response 00:14:12.505 response: 00:14:12.505 { 00:14:12.505 "code": -32602, 00:14:12.505 "message": "Invalid parameters" 00:14:12.505 } 00:14:12.505 
10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # es=1 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:12.505 [ 0]:0x2 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d8463f7fa7a74445ba5ed2a32241a8cd 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d8463f7fa7a74445ba5ed2a32241a8cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.505 10:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.789 10:35:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:12.789 10:35:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:12.789 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.789 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 
-- # sync 00:14:12.789 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.789 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:12.789 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.789 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.789 rmmod nvme_tcp 00:14:13.050 rmmod nvme_fabrics 00:14:13.050 rmmod nvme_keyring 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3789826 ']' 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3789826 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3789826 ']' 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3789826 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3789826 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3789826' 00:14:13.050 killing process with pid 3789826 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3789826 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@970 -- # wait 3789826 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.050 10:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.593 10:35:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:15.593 00:14:15.593 real 0m16.643s 00:14:15.593 user 0m53.964s 00:14:15.593 sys 0m3.571s 00:14:15.593 10:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:15.593 10:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.593 ************************************ 00:14:15.593 END TEST nvmf_ns_masking 00:14:15.593 ************************************ 00:14:15.593 10:35:03 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:15.593 10:35:03 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:15.593 10:35:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:15.593 10:35:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:15.593 10:35:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.593 ************************************ 00:14:15.593 START TEST nvmf_nvme_cli 00:14:15.593 
************************************ 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:15.593 * Looking for test storage... 00:14:15.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.593 10:35:03 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:15.593 10:35:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.025 10:35:05 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.025 10:35:05 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:17.025 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:17.025 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.025 10:35:05 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:17.025 Found net devices under 0000:08:00.0: cvl_0_0 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:17.025 Found net devices under 0000:08:00.1: cvl_0_1 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.025 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:14:17.026 00:14:17.026 --- 10.0.0.2 ping statistics --- 00:14:17.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.026 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:17.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:14:17.026 00:14:17.026 --- 10.0.0.1 ping statistics --- 00:14:17.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.026 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3792596 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3792596 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3792596 ']' 
00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:17.026 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.026 [2024-07-23 10:35:05.494333] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:17.026 [2024-07-23 10:35:05.494433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.285 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.285 [2024-07-23 10:35:05.560946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.285 [2024-07-23 10:35:05.652010] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.285 [2024-07-23 10:35:05.652073] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.285 [2024-07-23 10:35:05.652089] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.285 [2024-07-23 10:35:05.652111] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.285 [2024-07-23 10:35:05.652123] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:17.285 [2024-07-23 10:35:05.652201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.285 [2024-07-23 10:35:05.652258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.285 [2024-07-23 10:35:05.652308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.285 [2024-07-23 10:35:05.652310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.285 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:17.285 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:17.285 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.285 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.285 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.285 10:35:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.544 [2024-07-23 10:35:05.792113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.544 Malloc0 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.544 
10:35:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.544 Malloc1 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.544 [2024-07-23 10:35:05.871721] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.544 10:35:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:14:17.544 00:14:17.544 Discovery Log Number of Records 2, Generation counter 2 00:14:17.544 =====Discovery Log Entry 0====== 00:14:17.544 trtype: tcp 00:14:17.544 adrfam: ipv4 00:14:17.544 subtype: current discovery subsystem 00:14:17.544 treq: not required 00:14:17.545 portid: 0 00:14:17.545 trsvcid: 4420 00:14:17.545 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:17.545 traddr: 10.0.0.2 00:14:17.545 eflags: explicit discovery connections, duplicate discovery information 00:14:17.545 sectype: none 00:14:17.545 =====Discovery Log Entry 1====== 00:14:17.545 trtype: tcp 00:14:17.545 adrfam: ipv4 00:14:17.545 subtype: nvme subsystem 00:14:17.545 treq: not required 00:14:17.545 portid: 0 00:14:17.545 trsvcid: 4420 00:14:17.545 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:17.545 traddr: 10.0.0.2 00:14:17.545 eflags: none 00:14:17.545 sectype: none 00:14:17.545 10:35:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:17.545 10:35:06 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:17.545 10:35:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:17.545 10:35:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.545 10:35:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:17.545 10:35:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:17.545 10:35:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.545 10:35:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:17.545 10:35:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.545 10:35:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:17.545 10:35:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:18.115 10:35:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:18.115 10:35:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:18.115 10:35:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.115 10:35:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:18.115 10:35:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:18.115 10:35:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 
00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:20.654 /dev/nvme0n1 ]] 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:20.654 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.655 rmmod nvme_tcp 00:14:20.655 rmmod nvme_fabrics 00:14:20.655 rmmod nvme_keyring 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3792596 ']' 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3792596 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@946 -- # '[' -z 3792596 ']' 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3792596 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3792596 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3792596' 00:14:20.655 killing process with pid 3792596 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3792596 00:14:20.655 10:35:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3792596 00:14:20.655 10:35:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.655 10:35:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.655 10:35:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.655 10:35:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.655 10:35:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.655 10:35:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.655 10:35:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.655 10:35:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.199 10:35:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.199 00:14:23.199 real 0m7.525s 00:14:23.199 user 
0m14.117s 00:14:23.199 sys 0m1.940s 00:14:23.199 10:35:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:23.199 10:35:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.199 ************************************ 00:14:23.199 END TEST nvmf_nvme_cli 00:14:23.199 ************************************ 00:14:23.199 10:35:11 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:23.199 10:35:11 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:23.199 10:35:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:23.199 10:35:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:23.199 10:35:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:23.199 ************************************ 00:14:23.199 START TEST nvmf_vfio_user 00:14:23.199 ************************************ 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:23.199 * Looking for test storage... 
00:14:23.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.199 
10:35:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3793314 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3793314' 00:14:23.199 Process pid: 3793314 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3793314 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3793314 ']' 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:23.199 [2024-07-23 10:35:11.331911] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:23.199 [2024-07-23 10:35:11.332014] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.199 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.199 [2024-07-23 10:35:11.393088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.199 [2024-07-23 10:35:11.481212] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.199 [2024-07-23 10:35:11.481273] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.199 [2024-07-23 10:35:11.481289] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.199 [2024-07-23 10:35:11.481302] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.199 [2024-07-23 10:35:11.481314] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:23.199 [2024-07-23 10:35:11.481391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.199 [2024-07-23 10:35:11.481445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.199 [2024-07-23 10:35:11.481501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.199 [2024-07-23 10:35:11.481504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:23.199 10:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:24.136 10:35:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:24.396 10:35:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:24.396 10:35:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:24.654 10:35:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:24.654 10:35:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:24.654 10:35:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:24.912 Malloc1 00:14:24.912 10:35:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:25.171 10:35:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:25.430 10:35:13 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:25.688 10:35:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:25.688 10:35:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:25.688 10:35:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:25.945 Malloc2 00:14:25.945 10:35:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:26.510 10:35:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:26.510 10:35:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:27.081 10:35:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:27.081 10:35:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:27.081 10:35:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:27.081 10:35:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:27.081 10:35:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:27.081 10:35:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:27.081 [2024-07-23 10:35:15.315106] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:27.081 [2024-07-23 10:35:15.315158] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3793646 ] 00:14:27.081 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.081 [2024-07-23 10:35:15.357394] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:27.081 [2024-07-23 10:35:15.364990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:27.081 [2024-07-23 10:35:15.365023] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f21daad5000 00:14:27.081 [2024-07-23 10:35:15.365986] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.081 [2024-07-23 10:35:15.366971] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.081 [2024-07-23 10:35:15.367974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.081 [2024-07-23 10:35:15.368988] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:27.081 [2024-07-23 10:35:15.369989] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 
0x0, Flags 0x3, Cap offset 0 00:14:27.081 [2024-07-23 10:35:15.370990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.081 [2024-07-23 10:35:15.371995] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:27.081 [2024-07-23 10:35:15.373005] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.081 [2024-07-23 10:35:15.374026] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:27.081 [2024-07-23 10:35:15.374048] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f21d988b000 00:14:27.081 [2024-07-23 10:35:15.375613] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:27.081 [2024-07-23 10:35:15.395821] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:27.081 [2024-07-23 10:35:15.395862] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:27.081 [2024-07-23 10:35:15.401164] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:27.081 [2024-07-23 10:35:15.401225] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:27.081 [2024-07-23 10:35:15.401327] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:27.081 [2024-07-23 10:35:15.401357] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:27.081 [2024-07-23 10:35:15.401368] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:27.081 [2024-07-23 10:35:15.402159] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:27.081 [2024-07-23 10:35:15.402183] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:27.081 [2024-07-23 10:35:15.402199] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:27.081 [2024-07-23 10:35:15.403160] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:27.081 [2024-07-23 10:35:15.403179] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:27.081 [2024-07-23 10:35:15.403194] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:27.081 [2024-07-23 10:35:15.404170] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:27.081 [2024-07-23 10:35:15.404199] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:27.081 [2024-07-23 10:35:15.405170] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:27.081 [2024-07-23 10:35:15.405190] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:27.081 [2024-07-23 10:35:15.405201] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:27.081 [2024-07-23 10:35:15.405213] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:27.081 [2024-07-23 10:35:15.405325] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:27.081 [2024-07-23 10:35:15.405334] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:27.081 [2024-07-23 10:35:15.405344] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:27.081 [2024-07-23 10:35:15.406195] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:27.081 [2024-07-23 10:35:15.407186] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:27.081 [2024-07-23 10:35:15.408196] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:27.081 [2024-07-23 10:35:15.409189] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:27.081 [2024-07-23 10:35:15.409304] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:27.081 [2024-07-23 10:35:15.410204] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:27.081 [2024-07-23 10:35:15.410223] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:27.081 [2024-07-23 10:35:15.410234] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:27.081 [2024-07-23 10:35:15.410262] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:27.081 [2024-07-23 10:35:15.410277] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:27.081 [2024-07-23 10:35:15.410306] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:27.081 [2024-07-23 10:35:15.410316] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.081 [2024-07-23 10:35:15.410336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.081 [2024-07-23 10:35:15.410405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:27.081 [2024-07-23 10:35:15.410429] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:27.081 [2024-07-23 10:35:15.410439] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:27.081 [2024-07-23 10:35:15.410448] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:27.081 [2024-07-23 10:35:15.410457] 
nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:27.082 [2024-07-23 10:35:15.410466] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:27.082 [2024-07-23 10:35:15.410486] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:27.082 [2024-07-23 10:35:15.410497] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.410512] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.410530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.410550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.410570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.082 [2024-07-23 10:35:15.410585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.082 [2024-07-23 10:35:15.410599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.082 [2024-07-23 10:35:15.410613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.082 [2024-07-23 10:35:15.410622] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.410639] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.410655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.410668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.410680] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:27.082 [2024-07-23 10:35:15.410689] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.410702] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.410717] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.410732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.410748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.410823] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.410842] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.410858] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:27.082 [2024-07-23 10:35:15.410867] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:27.082 [2024-07-23 10:35:15.410879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.410897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.410916] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:27.082 [2024-07-23 10:35:15.410934] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.410949] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.410963] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:27.082 [2024-07-23 10:35:15.410973] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.082 [2024-07-23 10:35:15.410984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.411012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.411035] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.411051] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.411065] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:27.082 [2024-07-23 10:35:15.411074] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.082 [2024-07-23 10:35:15.411085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.411102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.411117] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.411130] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.411145] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.411158] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.411168] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.411178] 
nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:27.082 [2024-07-23 10:35:15.411186] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:27.082 [2024-07-23 10:35:15.411197] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:27.082 [2024-07-23 10:35:15.411228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.411248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.411268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.411282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.411305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.411319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.411338] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.411350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.411370] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:27.082 
[2024-07-23 10:35:15.411380] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:27.082 [2024-07-23 10:35:15.411387] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:27.082 [2024-07-23 10:35:15.411395] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:27.082 [2024-07-23 10:35:15.411405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:27.082 [2024-07-23 10:35:15.411418] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:27.082 [2024-07-23 10:35:15.411427] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:27.082 [2024-07-23 10:35:15.411437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.411450] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:27.082 [2024-07-23 10:35:15.411458] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.082 [2024-07-23 10:35:15.411469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.411491] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:27.082 [2024-07-23 10:35:15.411501] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:27.082 [2024-07-23 10:35:15.411511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 
0x2000002f4000 PRP2 0x0 00:14:27.082 [2024-07-23 10:35:15.411525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.411546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.411564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:27.082 [2024-07-23 10:35:15.411582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:27.082 ===================================================== 00:14:27.082 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:27.082 ===================================================== 00:14:27.082 Controller Capabilities/Features 00:14:27.082 ================================ 00:14:27.082 Vendor ID: 4e58 00:14:27.082 Subsystem Vendor ID: 4e58 00:14:27.082 Serial Number: SPDK1 00:14:27.082 Model Number: SPDK bdev Controller 00:14:27.082 Firmware Version: 24.05.1 00:14:27.083 Recommended Arb Burst: 6 00:14:27.083 IEEE OUI Identifier: 8d 6b 50 00:14:27.083 Multi-path I/O 00:14:27.083 May have multiple subsystem ports: Yes 00:14:27.083 May have multiple controllers: Yes 00:14:27.083 Associated with SR-IOV VF: No 00:14:27.083 Max Data Transfer Size: 131072 00:14:27.083 Max Number of Namespaces: 32 00:14:27.083 Max Number of I/O Queues: 127 00:14:27.083 NVMe Specification Version (VS): 1.3 00:14:27.083 NVMe Specification Version (Identify): 1.3 00:14:27.083 Maximum Queue Entries: 256 00:14:27.083 Contiguous Queues Required: Yes 00:14:27.083 Arbitration Mechanisms Supported 00:14:27.083 Weighted Round Robin: Not Supported 00:14:27.083 Vendor Specific: Not Supported 00:14:27.083 Reset Timeout: 15000 ms 00:14:27.083 Doorbell Stride: 4 bytes 00:14:27.083 NVM 
Subsystem Reset: Not Supported 00:14:27.083 Command Sets Supported 00:14:27.083 NVM Command Set: Supported 00:14:27.083 Boot Partition: Not Supported 00:14:27.083 Memory Page Size Minimum: 4096 bytes 00:14:27.083 Memory Page Size Maximum: 4096 bytes 00:14:27.083 Persistent Memory Region: Not Supported 00:14:27.083 Optional Asynchronous Events Supported 00:14:27.083 Namespace Attribute Notices: Supported 00:14:27.083 Firmware Activation Notices: Not Supported 00:14:27.083 ANA Change Notices: Not Supported 00:14:27.083 PLE Aggregate Log Change Notices: Not Supported 00:14:27.083 LBA Status Info Alert Notices: Not Supported 00:14:27.083 EGE Aggregate Log Change Notices: Not Supported 00:14:27.083 Normal NVM Subsystem Shutdown event: Not Supported 00:14:27.083 Zone Descriptor Change Notices: Not Supported 00:14:27.083 Discovery Log Change Notices: Not Supported 00:14:27.083 Controller Attributes 00:14:27.083 128-bit Host Identifier: Supported 00:14:27.083 Non-Operational Permissive Mode: Not Supported 00:14:27.083 NVM Sets: Not Supported 00:14:27.083 Read Recovery Levels: Not Supported 00:14:27.083 Endurance Groups: Not Supported 00:14:27.083 Predictable Latency Mode: Not Supported 00:14:27.083 Traffic Based Keep ALive: Not Supported 00:14:27.083 Namespace Granularity: Not Supported 00:14:27.083 SQ Associations: Not Supported 00:14:27.083 UUID List: Not Supported 00:14:27.083 Multi-Domain Subsystem: Not Supported 00:14:27.083 Fixed Capacity Management: Not Supported 00:14:27.083 Variable Capacity Management: Not Supported 00:14:27.083 Delete Endurance Group: Not Supported 00:14:27.083 Delete NVM Set: Not Supported 00:14:27.083 Extended LBA Formats Supported: Not Supported 00:14:27.083 Flexible Data Placement Supported: Not Supported 00:14:27.083 00:14:27.083 Controller Memory Buffer Support 00:14:27.083 ================================ 00:14:27.083 Supported: No 00:14:27.083 00:14:27.083 Persistent Memory Region Support 00:14:27.083 ================================ 
00:14:27.083 Supported: No 00:14:27.083 00:14:27.083 Admin Command Set Attributes 00:14:27.083 ============================ 00:14:27.083 Security Send/Receive: Not Supported 00:14:27.083 Format NVM: Not Supported 00:14:27.083 Firmware Activate/Download: Not Supported 00:14:27.083 Namespace Management: Not Supported 00:14:27.083 Device Self-Test: Not Supported 00:14:27.083 Directives: Not Supported 00:14:27.083 NVMe-MI: Not Supported 00:14:27.083 Virtualization Management: Not Supported 00:14:27.083 Doorbell Buffer Config: Not Supported 00:14:27.083 Get LBA Status Capability: Not Supported 00:14:27.083 Command & Feature Lockdown Capability: Not Supported 00:14:27.083 Abort Command Limit: 4 00:14:27.083 Async Event Request Limit: 4 00:14:27.083 Number of Firmware Slots: N/A 00:14:27.083 Firmware Slot 1 Read-Only: N/A 00:14:27.083 Firmware Activation Without Reset: N/A 00:14:27.083 Multiple Update Detection Support: N/A 00:14:27.083 Firmware Update Granularity: No Information Provided 00:14:27.083 Per-Namespace SMART Log: No 00:14:27.083 Asymmetric Namespace Access Log Page: Not Supported 00:14:27.083 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:27.083 Command Effects Log Page: Supported 00:14:27.083 Get Log Page Extended Data: Supported 00:14:27.083 Telemetry Log Pages: Not Supported 00:14:27.083 Persistent Event Log Pages: Not Supported 00:14:27.083 Supported Log Pages Log Page: May Support 00:14:27.083 Commands Supported & Effects Log Page: Not Supported 00:14:27.083 Feature Identifiers & Effects Log Page:May Support 00:14:27.083 NVMe-MI Commands & Effects Log Page: May Support 00:14:27.083 Data Area 4 for Telemetry Log: Not Supported 00:14:27.083 Error Log Page Entries Supported: 128 00:14:27.083 Keep Alive: Supported 00:14:27.083 Keep Alive Granularity: 10000 ms 00:14:27.083 00:14:27.083 NVM Command Set Attributes 00:14:27.083 ========================== 00:14:27.083 Submission Queue Entry Size 00:14:27.083 Max: 64 00:14:27.083 Min: 64 00:14:27.083 Completion 
Queue Entry Size 00:14:27.083 Max: 16 00:14:27.083 Min: 16 00:14:27.083 Number of Namespaces: 32 00:14:27.083 Compare Command: Supported 00:14:27.083 Write Uncorrectable Command: Not Supported 00:14:27.083 Dataset Management Command: Supported 00:14:27.083 Write Zeroes Command: Supported 00:14:27.083 Set Features Save Field: Not Supported 00:14:27.083 Reservations: Not Supported 00:14:27.083 Timestamp: Not Supported 00:14:27.083 Copy: Supported 00:14:27.083 Volatile Write Cache: Present 00:14:27.083 Atomic Write Unit (Normal): 1 00:14:27.083 Atomic Write Unit (PFail): 1 00:14:27.083 Atomic Compare & Write Unit: 1 00:14:27.083 Fused Compare & Write: Supported 00:14:27.083 Scatter-Gather List 00:14:27.083 SGL Command Set: Supported (Dword aligned) 00:14:27.083 SGL Keyed: Not Supported 00:14:27.083 SGL Bit Bucket Descriptor: Not Supported 00:14:27.083 SGL Metadata Pointer: Not Supported 00:14:27.083 Oversized SGL: Not Supported 00:14:27.083 SGL Metadata Address: Not Supported 00:14:27.083 SGL Offset: Not Supported 00:14:27.083 Transport SGL Data Block: Not Supported 00:14:27.083 Replay Protected Memory Block: Not Supported 00:14:27.083 00:14:27.083 Firmware Slot Information 00:14:27.083 ========================= 00:14:27.083 Active slot: 1 00:14:27.083 Slot 1 Firmware Revision: 24.05.1 00:14:27.083 00:14:27.083 00:14:27.083 Commands Supported and Effects 00:14:27.083 ============================== 00:14:27.083 Admin Commands 00:14:27.083 -------------- 00:14:27.083 Get Log Page (02h): Supported 00:14:27.083 Identify (06h): Supported 00:14:27.083 Abort (08h): Supported 00:14:27.083 Set Features (09h): Supported 00:14:27.083 Get Features (0Ah): Supported 00:14:27.083 Asynchronous Event Request (0Ch): Supported 00:14:27.083 Keep Alive (18h): Supported 00:14:27.083 I/O Commands 00:14:27.083 ------------ 00:14:27.083 Flush (00h): Supported LBA-Change 00:14:27.083 Write (01h): Supported LBA-Change 00:14:27.083 Read (02h): Supported 00:14:27.083 Compare (05h): Supported 
00:14:27.083 Write Zeroes (08h): Supported LBA-Change 00:14:27.083 Dataset Management (09h): Supported LBA-Change 00:14:27.083 Copy (19h): Supported LBA-Change 00:14:27.083 Unknown (79h): Supported LBA-Change 00:14:27.083 Unknown (7Ah): Supported 00:14:27.083 00:14:27.083 Error Log 00:14:27.083 ========= 00:14:27.083 00:14:27.083 Arbitration 00:14:27.083 =========== 00:14:27.083 Arbitration Burst: 1 00:14:27.083 00:14:27.083 Power Management 00:14:27.083 ================ 00:14:27.083 Number of Power States: 1 00:14:27.083 Current Power State: Power State #0 00:14:27.083 Power State #0: 00:14:27.083 Max Power: 0.00 W 00:14:27.083 Non-Operational State: Operational 00:14:27.083 Entry Latency: Not Reported 00:14:27.083 Exit Latency: Not Reported 00:14:27.083 Relative Read Throughput: 0 00:14:27.083 Relative Read Latency: 0 00:14:27.083 Relative Write Throughput: 0 00:14:27.083 Relative Write Latency: 0 00:14:27.083 Idle Power: Not Reported 00:14:27.083 Active Power: Not Reported 00:14:27.083 Non-Operational Permissive Mode: Not Supported 00:14:27.083 00:14:27.083 Health Information 00:14:27.083 ================== 00:14:27.083 Critical Warnings: 00:14:27.083 Available Spare Space: OK 00:14:27.083 Temperature: OK 00:14:27.083 Device Reliability: OK 00:14:27.083 Read Only: No 00:14:27.083 Volatile Memory Backup: OK 00:14:27.083 Current Temperature: 0 Kelvin[2024-07-23 10:35:15.411739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:27.084 [2024-07-23 10:35:15.411757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:27.084 [2024-07-23 10:35:15.411798] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:27.084 [2024-07-23 10:35:15.411817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:27.084 [2024-07-23 10:35:15.411829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.084 [2024-07-23 10:35:15.411840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.084 [2024-07-23 10:35:15.411856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.084 [2024-07-23 10:35:15.415492] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:27.084 [2024-07-23 10:35:15.415515] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:27.084 [2024-07-23 10:35:15.416246] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:27.084 [2024-07-23 10:35:15.416330] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:27.084 [2024-07-23 10:35:15.416345] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:27.084 [2024-07-23 10:35:15.417246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:27.084 [2024-07-23 10:35:15.417271] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:27.084 [2024-07-23 10:35:15.417348] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:27.084 [2024-07-23 10:35:15.419289] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, 
IOVA 0x200000200000, Size 0x200000 00:14:27.084 (-273 Celsius) 00:14:27.084 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:27.084 Available Spare: 0% 00:14:27.084 Available Spare Threshold: 0% 00:14:27.084 Life Percentage Used: 0% 00:14:27.084 Data Units Read: 0 00:14:27.084 Data Units Written: 0 00:14:27.084 Host Read Commands: 0 00:14:27.084 Host Write Commands: 0 00:14:27.084 Controller Busy Time: 0 minutes 00:14:27.084 Power Cycles: 0 00:14:27.084 Power On Hours: 0 hours 00:14:27.084 Unsafe Shutdowns: 0 00:14:27.084 Unrecoverable Media Errors: 0 00:14:27.084 Lifetime Error Log Entries: 0 00:14:27.084 Warning Temperature Time: 0 minutes 00:14:27.084 Critical Temperature Time: 0 minutes 00:14:27.084 00:14:27.084 Number of Queues 00:14:27.084 ================ 00:14:27.084 Number of I/O Submission Queues: 127 00:14:27.084 Number of I/O Completion Queues: 127 00:14:27.084 00:14:27.084 Active Namespaces 00:14:27.084 ================= 00:14:27.084 Namespace ID:1 00:14:27.084 Error Recovery Timeout: Unlimited 00:14:27.084 Command Set Identifier: NVM (00h) 00:14:27.084 Deallocate: Supported 00:14:27.084 Deallocated/Unwritten Error: Not Supported 00:14:27.084 Deallocated Read Value: Unknown 00:14:27.084 Deallocate in Write Zeroes: Not Supported 00:14:27.084 Deallocated Guard Field: 0xFFFF 00:14:27.084 Flush: Supported 00:14:27.084 Reservation: Supported 00:14:27.084 Namespace Sharing Capabilities: Multiple Controllers 00:14:27.084 Size (in LBAs): 131072 (0GiB) 00:14:27.084 Capacity (in LBAs): 131072 (0GiB) 00:14:27.084 Utilization (in LBAs): 131072 (0GiB) 00:14:27.084 NGUID: 6C80574F54C042E8B0354789303FBEA2 00:14:27.084 UUID: 6c80574f-54c0-42e8-b035-4789303fbea2 00:14:27.084 Thin Provisioning: Not Supported 00:14:27.084 Per-NS Atomic Units: Yes 00:14:27.084 Atomic Boundary Size (Normal): 0 00:14:27.084 Atomic Boundary Size (PFail): 0 00:14:27.084 Atomic Boundary Offset: 0 00:14:27.084 Maximum Single Source Range Length: 65535 00:14:27.084 Maximum Copy Length: 65535 
00:14:27.084 Maximum Source Range Count: 1 00:14:27.084 NGUID/EUI64 Never Reused: No 00:14:27.084 Namespace Write Protected: No 00:14:27.084 Number of LBA Formats: 1 00:14:27.084 Current LBA Format: LBA Format #00 00:14:27.084 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:27.084 00:14:27.084 10:35:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:27.084 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.343 [2024-07-23 10:35:15.640328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:32.619 Initializing NVMe Controllers 00:14:32.619 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:32.619 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:32.619 Initialization complete. Launching workers. 
00:14:32.619 ======================================================== 00:14:32.619 Latency(us) 00:14:32.619 Device Information : IOPS MiB/s Average min max 00:14:32.619 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24134.61 94.28 5303.49 1468.73 10570.19 00:14:32.619 ======================================================== 00:14:32.619 Total : 24134.61 94.28 5303.49 1468.73 10570.19 00:14:32.619 00:14:32.619 [2024-07-23 10:35:20.662083] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:32.619 10:35:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:32.619 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.619 [2024-07-23 10:35:20.889231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:37.893 Initializing NVMe Controllers 00:14:37.893 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.893 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:37.893 Initialization complete. Launching workers. 
00:14:37.893 ======================================================== 00:14:37.893 Latency(us) 00:14:37.893 Device Information : IOPS MiB/s Average min max 00:14:37.893 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16016.77 62.57 7990.82 7327.89 14975.30 00:14:37.893 ======================================================== 00:14:37.893 Total : 16016.77 62.57 7990.82 7327.89 14975.30 00:14:37.893 00:14:37.893 [2024-07-23 10:35:25.922550] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:37.893 10:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:37.893 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.893 [2024-07-23 10:35:26.142733] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:43.169 [2024-07-23 10:35:31.215779] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:43.170 Initializing NVMe Controllers 00:14:43.170 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:43.170 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:43.170 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:43.170 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:43.170 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:43.170 Initialization complete. Launching workers. 
00:14:43.170 Starting thread on core 2 00:14:43.170 Starting thread on core 3 00:14:43.170 Starting thread on core 1 00:14:43.170 10:35:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:43.170 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.170 [2024-07-23 10:35:31.505970] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.456 [2024-07-23 10:35:34.566200] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.456 Initializing NVMe Controllers 00:14:46.456 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.456 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.456 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:46.456 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:46.456 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:46.456 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:46.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:46.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:46.456 Initialization complete. Launching workers. 
00:14:46.456 Starting thread on core 1 with urgent priority queue 00:14:46.456 Starting thread on core 2 with urgent priority queue 00:14:46.456 Starting thread on core 3 with urgent priority queue 00:14:46.456 Starting thread on core 0 with urgent priority queue 00:14:46.456 SPDK bdev Controller (SPDK1 ) core 0: 7750.67 IO/s 12.90 secs/100000 ios 00:14:46.456 SPDK bdev Controller (SPDK1 ) core 1: 7333.00 IO/s 13.64 secs/100000 ios 00:14:46.456 SPDK bdev Controller (SPDK1 ) core 2: 7785.00 IO/s 12.85 secs/100000 ios 00:14:46.456 SPDK bdev Controller (SPDK1 ) core 3: 8198.67 IO/s 12.20 secs/100000 ios 00:14:46.456 ======================================================== 00:14:46.456 00:14:46.456 10:35:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:46.456 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.456 [2024-07-23 10:35:34.839577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.456 Initializing NVMe Controllers 00:14:46.456 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.456 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.456 Namespace ID: 1 size: 0GB 00:14:46.456 Initialization complete. 00:14:46.456 INFO: using host memory buffer for IO 00:14:46.456 Hello world! 
00:14:46.456 [2024-07-23 10:35:34.874220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.456 10:35:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:46.714 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.714 [2024-07-23 10:35:35.146976] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:48.095 Initializing NVMe Controllers 00:14:48.095 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:48.095 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:48.095 Initialization complete. Launching workers. 00:14:48.095 submit (in ns) avg, min, max = 10263.4, 4474.1, 4013650.4 00:14:48.095 complete (in ns) avg, min, max = 27649.0, 2650.4, 4015965.9 00:14:48.095 00:14:48.095 Submit histogram 00:14:48.095 ================ 00:14:48.095 Range in us Cumulative Count 00:14:48.095 4.456 - 4.480: 0.0171% ( 2) 00:14:48.095 4.480 - 4.504: 0.1110% ( 11) 00:14:48.095 4.504 - 4.527: 0.6317% ( 61) 00:14:48.095 4.527 - 4.551: 2.5096% ( 220) 00:14:48.095 4.551 - 4.575: 6.1886% ( 431) 00:14:48.095 4.575 - 4.599: 10.2518% ( 476) 00:14:48.095 4.599 - 4.622: 13.2992% ( 357) 00:14:48.095 4.622 - 4.646: 15.2454% ( 228) 00:14:48.095 4.646 - 4.670: 16.2100% ( 113) 00:14:48.095 4.670 - 4.693: 16.9185% ( 83) 00:14:48.095 4.693 - 4.717: 18.6001% ( 197) 00:14:48.095 4.717 - 4.741: 22.7998% ( 492) 00:14:48.095 4.741 - 4.764: 30.4140% ( 892) 00:14:48.095 4.764 - 4.788: 35.6808% ( 617) 00:14:48.095 4.788 - 4.812: 39.8890% ( 493) 00:14:48.095 4.812 - 4.836: 41.3402% ( 170) 00:14:48.095 4.836 - 4.859: 42.0743% ( 86) 00:14:48.095 4.859 - 4.883: 42.7145% ( 75) 00:14:48.095 4.883 - 4.907: 43.9010% ( 139) 00:14:48.095 4.907 - 4.930: 45.5484% ( 193) 
00:14:48.095 4.930 - 4.954: 47.8361% ( 268) 00:14:48.095 4.954 - 4.978: 49.1507% ( 154) 00:14:48.095 4.978 - 5.001: 50.4055% ( 147) 00:14:48.095 5.001 - 5.025: 51.1993% ( 93) 00:14:48.095 5.025 - 5.049: 51.7115% ( 60) 00:14:48.095 5.049 - 5.073: 51.9334% ( 26) 00:14:48.095 5.073 - 5.096: 52.0615% ( 15) 00:14:48.095 5.096 - 5.120: 52.2578% ( 23) 00:14:48.095 5.120 - 5.144: 53.5723% ( 154) 00:14:48.095 5.144 - 5.167: 57.2685% ( 433) 00:14:48.095 5.167 - 5.191: 61.6645% ( 515) 00:14:48.095 5.191 - 5.215: 64.0461% ( 279) 00:14:48.095 5.215 - 5.239: 65.7618% ( 201) 00:14:48.095 5.239 - 5.262: 66.6581% ( 105) 00:14:48.095 5.262 - 5.286: 67.2727% ( 72) 00:14:48.095 5.286 - 5.310: 69.3043% ( 238) 00:14:48.095 5.310 - 5.333: 72.8724% ( 418) 00:14:48.095 5.333 - 5.357: 75.0149% ( 251) 00:14:48.095 5.357 - 5.381: 76.2356% ( 143) 00:14:48.095 5.381 - 5.404: 77.4819% ( 146) 00:14:48.095 5.404 - 5.428: 78.4720% ( 116) 00:14:48.095 5.428 - 5.452: 79.1293% ( 77) 00:14:48.095 5.452 - 5.476: 79.3342% ( 24) 00:14:48.095 5.476 - 5.499: 79.4793% ( 17) 00:14:48.095 5.499 - 5.523: 79.6244% ( 17) 00:14:48.095 5.523 - 5.547: 83.7985% ( 489) 00:14:48.095 5.547 - 5.570: 88.3653% ( 535) 00:14:48.095 5.570 - 5.594: 91.7883% ( 401) 00:14:48.095 5.594 - 5.618: 93.4699% ( 197) 00:14:48.095 5.618 - 5.641: 94.2296% ( 89) 00:14:48.095 5.641 - 5.665: 94.5881% ( 42) 00:14:48.095 5.665 - 5.689: 94.8271% ( 28) 00:14:48.095 5.689 - 5.713: 94.9466% ( 14) 00:14:48.095 5.713 - 5.736: 95.0747% ( 15) 00:14:48.095 5.736 - 5.760: 95.1430% ( 8) 00:14:48.095 5.760 - 5.784: 95.2710% ( 15) 00:14:48.095 5.784 - 5.807: 95.3905% ( 14) 00:14:48.095 5.807 - 5.831: 95.5783% ( 22) 00:14:48.095 5.831 - 5.855: 95.6381% ( 7) 00:14:48.095 5.855 - 5.879: 95.7576% ( 14) 00:14:48.095 5.879 - 5.902: 95.7917% ( 4) 00:14:48.095 5.902 - 5.926: 95.8856% ( 11) 00:14:48.095 5.926 - 5.950: 96.0051% ( 14) 00:14:48.095 5.950 - 5.973: 96.0393% ( 4) 00:14:48.095 5.973 - 5.997: 96.1161% ( 9) 00:14:48.095 5.997 - 6.021: 96.1758% ( 7) 
00:14:48.095 6.021 - 6.044: 96.1929% ( 2) 00:14:48.095 6.044 - 6.068: 96.3295% ( 16) 00:14:48.095 6.068 - 6.116: 96.9953% ( 78) 00:14:48.095 6.116 - 6.163: 97.4563% ( 54) 00:14:48.095 6.163 - 6.210: 97.5843% ( 15) 00:14:48.095 6.210 - 6.258: 98.0708% ( 57) 00:14:48.095 6.258 - 6.305: 98.2672% ( 23) 00:14:48.095 6.305 - 6.353: 98.3781% ( 13) 00:14:48.095 6.353 - 6.400: 98.4038% ( 3) 00:14:48.095 6.400 - 6.447: 98.4379% ( 4) 00:14:48.095 6.447 - 6.495: 98.4977% ( 7) 00:14:48.095 6.495 - 6.542: 98.5318% ( 4) 00:14:48.095 6.542 - 6.590: 98.5574% ( 3) 00:14:48.095 6.590 - 6.637: 98.5659% ( 1) 00:14:48.095 6.637 - 6.684: 98.6086% ( 5) 00:14:48.095 6.684 - 6.732: 98.6428% ( 4) 00:14:48.095 6.732 - 6.779: 98.6598% ( 2) 00:14:48.096 6.827 - 6.874: 98.6854% ( 3) 00:14:48.096 6.874 - 6.921: 98.7111% ( 3) 00:14:48.096 6.921 - 6.969: 98.7452% ( 4) 00:14:48.096 6.969 - 7.016: 98.7708% ( 3) 00:14:48.096 7.016 - 7.064: 98.7793% ( 1) 00:14:48.096 7.064 - 7.111: 98.7879% ( 1) 00:14:48.096 7.159 - 7.206: 98.7964% ( 1) 00:14:48.096 7.396 - 7.443: 98.8050% ( 1) 00:14:48.096 7.490 - 7.538: 98.8135% ( 1) 00:14:48.096 7.585 - 7.633: 98.8220% ( 1) 00:14:48.096 7.680 - 7.727: 98.8306% ( 1) 00:14:48.096 7.870 - 7.917: 98.8391% ( 1) 00:14:48.096 8.059 - 8.107: 98.8562% ( 2) 00:14:48.096 8.107 - 8.154: 98.8647% ( 1) 00:14:48.096 8.154 - 8.201: 98.8732% ( 1) 00:14:48.096 8.201 - 8.249: 98.8818% ( 1) 00:14:48.096 8.249 - 8.296: 98.8903% ( 1) 00:14:48.096 8.391 - 8.439: 98.8988% ( 1) 00:14:48.096 8.439 - 8.486: 98.9159% ( 2) 00:14:48.096 8.533 - 8.581: 98.9330% ( 2) 00:14:48.096 8.628 - 8.676: 98.9415% ( 1) 00:14:48.096 8.676 - 8.723: 98.9501% ( 1) 00:14:48.096 8.770 - 8.818: 98.9586% ( 1) 00:14:48.096 8.865 - 8.913: 98.9757% ( 2) 00:14:48.096 8.960 - 9.007: 98.9927% ( 2) 00:14:48.096 9.007 - 9.055: 99.0013% ( 1) 00:14:48.096 9.055 - 9.102: 99.0269% ( 3) 00:14:48.096 9.102 - 9.150: 99.0440% ( 2) 00:14:48.096 9.244 - 9.292: 99.0781% ( 4) 00:14:48.096 9.481 - 9.529: 99.0866% ( 1) 00:14:48.096 9.529 
- 9.576: 99.1037% ( 2) 00:14:48.096 9.576 - 9.624: 99.1122% ( 1) 00:14:48.096 9.624 - 9.671: 99.1208% ( 1) 00:14:48.096 9.719 - 9.766: 99.1293% ( 1) 00:14:48.096 9.813 - 9.861: 99.1549% ( 3) 00:14:48.096 9.956 - 10.003: 99.1635% ( 1) 00:14:48.096 10.003 - 10.050: 99.1720% ( 1) 00:14:48.096 10.050 - 10.098: 99.1805% ( 1) 00:14:48.096 10.145 - 10.193: 99.1891% ( 1) 00:14:48.096 10.287 - 10.335: 99.1976% ( 1) 00:14:48.096 10.335 - 10.382: 99.2232% ( 3) 00:14:48.096 10.382 - 10.430: 99.2318% ( 1) 00:14:48.096 10.430 - 10.477: 99.2574% ( 3) 00:14:48.096 10.477 - 10.524: 99.2744% ( 2) 00:14:48.096 10.572 - 10.619: 99.2830% ( 1) 00:14:48.096 10.714 - 10.761: 99.2915% ( 1) 00:14:48.096 10.904 - 10.951: 99.3000% ( 1) 00:14:48.096 10.951 - 10.999: 99.3086% ( 1) 00:14:48.096 10.999 - 11.046: 99.3171% ( 1) 00:14:48.096 11.093 - 11.141: 99.3257% ( 1) 00:14:48.096 11.141 - 11.188: 99.3427% ( 2) 00:14:48.096 11.188 - 11.236: 99.3598% ( 2) 00:14:48.096 11.236 - 11.283: 99.3769% ( 2) 00:14:48.096 11.425 - 11.473: 99.3854% ( 1) 00:14:48.096 11.567 - 11.615: 99.4025% ( 2) 00:14:48.096 11.710 - 11.757: 99.4110% ( 1) 00:14:48.096 11.947 - 11.994: 99.4195% ( 1) 00:14:48.096 12.041 - 12.089: 99.4281% ( 1) 00:14:48.096 12.136 - 12.231: 99.4452% ( 2) 00:14:48.096 12.421 - 12.516: 99.4537% ( 1) 00:14:48.096 12.516 - 12.610: 99.4622% ( 1) 00:14:48.096 12.705 - 12.800: 99.4708% ( 1) 00:14:48.096 12.800 - 12.895: 99.4878% ( 2) 00:14:48.096 12.990 - 13.084: 99.4964% ( 1) 00:14:48.096 13.084 - 13.179: 99.5134% ( 2) 00:14:48.096 13.274 - 13.369: 99.5220% ( 1) 00:14:48.096 13.369 - 13.464: 99.5305% ( 1) 00:14:48.096 13.653 - 13.748: 99.5732% ( 5) 00:14:48.096 13.748 - 13.843: 99.6842% ( 13) 00:14:48.096 13.843 - 13.938: 99.7098% ( 3) 00:14:48.096 13.938 - 14.033: 99.7268% ( 2) 00:14:48.096 14.127 - 14.222: 99.7439% ( 2) 00:14:48.096 14.222 - 14.317: 99.7610% ( 2) 00:14:48.096 14.317 - 14.412: 99.7866% ( 3) 00:14:48.096 14.412 - 14.507: 99.7951% ( 1) 00:14:48.096 14.601 - 14.696: 99.8037% ( 1) 
00:14:48.096 15.076 - 15.170: 99.8122% ( 1) 00:14:48.096 16.119 - 16.213: 99.8207% ( 1) 00:14:48.096 16.687 - 16.782: 99.8293% ( 1) 00:14:48.096 16.782 - 16.877: 99.8378% ( 1) 00:14:48.096 18.584 - 18.679: 99.8464% ( 1) 00:14:48.096 18.679 - 18.773: 99.8634% ( 2) 00:14:48.096 19.058 - 19.153: 99.8720% ( 1) 00:14:48.096 3980.705 - 4004.978: 99.9232% ( 6) 00:14:48.096 4004.978 - 4029.250: 100.0000% ( 9) 00:14:48.096 00:14:48.096 Complete histogram 00:14:48.096 ================== 00:14:48.096 Range in us Cumulative Count 00:14:48.096 2.643 - 2.655: 0.0939% ( 11) 00:14:48.096 2.655 - 2.667: 9.3385% ( 1083) 00:14:48.096 2.667 - 2.679: 48.6726% ( 4608) 00:14:48.096 2.679 - 2.690: 67.4776% ( 2203) 00:14:48.096 2.690 - 2.702: 73.6833% ( 727) 00:14:48.096 2.702 - 2.714: 82.2621% ( 1005) 00:14:48.096 2.714 - 2.726: 89.2958% ( 824) 00:14:48.096 2.726 - 2.738: 94.4516% ( 604) 00:14:48.096 2.738 - 2.750: 96.5002% ( 240) 00:14:48.096 2.750 - 2.761: 97.1490% ( 76) 00:14:48.096 2.761 - 2.773: 97.5501% ( 47) 00:14:48.096 2.773 - 2.785: 97.7636% ( 25) 00:14:48.096 2.785 - 2.797: 97.9684% ( 24) 00:14:48.096 2.797 - 2.809: 98.0367% ( 8) 00:14:48.096 2.809 - 2.821: 98.1221% ( 10) 00:14:48.096 2.821 - 2.833: 98.1562% ( 4) 00:14:48.096 2.833 - 2.844: 98.1647% ( 1) 00:14:48.096 2.856 - 2.868: 98.1904% ( 3) 00:14:48.096 2.868 - 2.880: 98.1989% ( 1) 00:14:48.096 2.880 - 2.892: 98.2074% ( 1) 00:14:48.096 2.904 - 2.916: 98.2160% ( 1) 00:14:48.096 2.916 - 2.927: 98.2586% ( 5) 00:14:48.096 2.927 - 2.939: 98.2843% ( 3) 00:14:48.096 2.939 - 2.951: 98.2928% ( 1) 00:14:48.096 2.951 - 2.963: 98.3184% ( 3) 00:14:48.096 2.963 - 2.975: 98.3355% ( 2) 00:14:48.096 2.987 - 2.999: 98.3440% ( 1) 00:14:48.096 2.999 - 3.010: 98.3525% ( 1) 00:14:48.096 3.010 - 3.022: 98.3611% ( 1) 00:14:48.096 3.022 - 3.034: 98.3696% ( 1) 00:14:48.096 3.034 - 3.058: 98.4038% ( 4) 00:14:48.096 3.058 - 3.081: 98.4208% ( 2) 00:14:48.096 3.081 - 3.105: 98.4550% ( 4) 00:14:48.096 3.105 - 3.129: 98.4891% ( 4) 00:14:48.096 3.129 - 
3.153: 98.5147% ( 3) 00:14:48.096 3.153 - 3.176: 98.5233% ( 1) 00:14:48.096 3.176 - 3.200: 98.5830% ( 7) 00:14:48.096 3.200 - 3.224: 98.6001% ( 2) 00:14:48.096 3.224 - 3.247: 98.6172% ( 2) 00:14:48.096 3.247 - 3.271: 98.6342% ( 2) 00:14:48.096 3.295 - 3.319: 98.6428% ( 1) 00:14:48.096 3.319 - 3.342: 98.6513% ( 1) 00:14:48.096 3.342 - 3.366: 98.6598% ( 1) 00:14:48.096 3.366 - 3.390: 98.6940% ( 4) 00:14:48.096 3.390 - 3.413: 98.7025% ( 1) 00:14:48.096 3.413 - 3.437: 98.7367% ( 4) 00:14:48.096 3.437 - 3.461: 98.7793% ( 5) 00:14:48.096 3.461 - 3.484: 98.7964% ( 2) 00:14:48.096 3.484 - 3.508: 98.8391% ( 5) 00:14:48.096 3.508 - 3.532: 98.8562% ( 2) 00:14:48.096 3.532 - 3.556: 98.8732% ( 2) 00:14:48.096 3.556 - 3.579: 98.9074% ( 4) 00:14:48.096 3.579 - 3.603: 98.9671% ( 7) 00:14:48.096 3.603 - 3.627: 98.9927% ( 3) 00:14:48.096 3.650 - 3.674: 99.0013% ( 1) 00:14:48.096 3.674 - 3.698: 99.0098% ( 1) 00:14:48.096 3.698 - 3.721: 99.0269% ( 2) 00:14:48.096 3.745 - 3.769: 99.0354% ( 1) 00:14:48.096 3.816 - 3.840: 99.0440% ( 1) 00:14:48.096 3.840 - 3.864: 99.0525% ( 1) 00:14:48.096 4.006 - 4.030: 99.0610% ( 1) 00:14:48.096 4.030 - 4.053: 99.0696% ( 1) 00:14:48.096 4.077 - 4.101: 99.0781% ( 1) 00:14:48.096 4.196 - 4.219: 99.0866% ( 1) 00:14:48.096 4.219 - 4.243: 99.0952% ( 1) 00:14:48.096 4.338 - 4.361: 99.1208% ( 3) 00:14:48.096 4.907 - 4.930: 99.1293% ( 1) 00:14:48.096 5.950 - 5.973: 99.1379% ( 1) 00:14:48.096 6.021 - 6.044: 99.1464% ( 1) 00:14:48.096 6.163 - 6.210: 99.1549% ( 1) 00:14:48.096 6.210 - 6.258: 99.1635% ( 1) 00:14:48.096 6.305 - 6.353: 99.1805% ( 2) 00:14:48.096 6.353 - 6.400: 99.1891% ( 1) 00:14:48.096 6.921 - 6.969: 99.1976% ( 1) 00:14:48.096 7.016 - 7.064: 99.2061% ( 1) 00:14:48.096 7.348 - 7.396: 99.2147% ( 1) 00:14:48.096 7.396 - 7.443: 99.2232% ( 1) 00:14:48.096 7.490 - 7.538: 99.2318% ( 1) 00:14:48.096 7.680 - 7.727: 99.2403% ( 1) 00:14:48.096 8.249 - 8.296: 99.2574% ( 2) 00:14:48.096 8.533 - 8.581: 99.2659% ( 1) 00:14:48.096 9.434 - 9.481: 99.2744% ( 1) 
00:14:48.096 9.813 - 9.861: 99.2830% ( 1) [2024-07-23 10:35:36.170133] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:48.096 9.861 - 9.908: 99.2915% ( 1) 00:14:48.096 9.956 - 10.003: 99.3000% ( 1) 00:14:48.096 10.951 - 10.999: 99.3086% ( 1) 00:14:48.096 13.559 - 13.653: 99.3171% ( 1) 00:14:48.096 14.127 - 14.222: 99.3257% ( 1) 00:14:48.096 14.507 - 14.601: 99.3342% ( 1) 00:14:48.096 15.550 - 15.644: 99.3427% ( 1) 00:14:48.096 16.593 - 16.687: 99.3513% ( 1) 00:14:48.096 16.782 - 16.877: 99.3598% ( 1) 00:14:48.096 22.850 - 22.945: 99.3683% ( 1) 00:14:48.097 23.135 - 23.230: 99.3769% ( 1) 00:14:48.097 3980.705 - 4004.978: 99.6756% ( 35) 00:14:48.097 4004.978 - 4029.250: 100.0000% ( 38) 00:14:48.097 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:48.097 [ 00:14:48.097 { 00:14:48.097 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:48.097 "subtype": "Discovery", 00:14:48.097 "listen_addresses": [], 00:14:48.097 "allow_any_host": true, 00:14:48.097 "hosts": [] 00:14:48.097 }, 00:14:48.097 { 00:14:48.097 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:48.097 "subtype": "NVMe", 00:14:48.097 "listen_addresses": [ 00:14:48.097 { 00:14:48.097 "trtype": "VFIOUSER", 00:14:48.097 "adrfam": "IPv4", 00:14:48.097 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 
00:14:48.097 "trsvcid": "0" 00:14:48.097 } 00:14:48.097 ], 00:14:48.097 "allow_any_host": true, 00:14:48.097 "hosts": [], 00:14:48.097 "serial_number": "SPDK1", 00:14:48.097 "model_number": "SPDK bdev Controller", 00:14:48.097 "max_namespaces": 32, 00:14:48.097 "min_cntlid": 1, 00:14:48.097 "max_cntlid": 65519, 00:14:48.097 "namespaces": [ 00:14:48.097 { 00:14:48.097 "nsid": 1, 00:14:48.097 "bdev_name": "Malloc1", 00:14:48.097 "name": "Malloc1", 00:14:48.097 "nguid": "6C80574F54C042E8B0354789303FBEA2", 00:14:48.097 "uuid": "6c80574f-54c0-42e8-b035-4789303fbea2" 00:14:48.097 } 00:14:48.097 ] 00:14:48.097 }, 00:14:48.097 { 00:14:48.097 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:48.097 "subtype": "NVMe", 00:14:48.097 "listen_addresses": [ 00:14:48.097 { 00:14:48.097 "trtype": "VFIOUSER", 00:14:48.097 "adrfam": "IPv4", 00:14:48.097 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:48.097 "trsvcid": "0" 00:14:48.097 } 00:14:48.097 ], 00:14:48.097 "allow_any_host": true, 00:14:48.097 "hosts": [], 00:14:48.097 "serial_number": "SPDK2", 00:14:48.097 "model_number": "SPDK bdev Controller", 00:14:48.097 "max_namespaces": 32, 00:14:48.097 "min_cntlid": 1, 00:14:48.097 "max_cntlid": 65519, 00:14:48.097 "namespaces": [ 00:14:48.097 { 00:14:48.097 "nsid": 1, 00:14:48.097 "bdev_name": "Malloc2", 00:14:48.097 "name": "Malloc2", 00:14:48.097 "nguid": "972AFCCB72624AE2B2F465C07DD71002", 00:14:48.097 "uuid": "972afccb-7262-4ae2-b2f4-65c07dd71002" 00:14:48.097 } 00:14:48.097 ] 00:14:48.097 } 00:14:48.097 ] 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3795640 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t 
/tmp/aer_touch_file 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:48.097 10:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:48.097 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.355 [2024-07-23 10:35:36.675040] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:48.355 Malloc3 00:14:48.355 10:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:48.922 [2024-07-23 10:35:37.124520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:48.922 10:35:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:48.922 Asynchronous Event Request test 00:14:48.922 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:48.922 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:48.922 Registering asynchronous event callbacks... 00:14:48.922 Starting namespace attribute notice tests for all controllers... 
00:14:48.922 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:48.922 aer_cb - Changed Namespace 00:14:48.922 Cleaning up... 00:14:48.922 [ 00:14:48.922 { 00:14:48.922 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:48.922 "subtype": "Discovery", 00:14:48.922 "listen_addresses": [], 00:14:48.922 "allow_any_host": true, 00:14:48.922 "hosts": [] 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:48.922 "subtype": "NVMe", 00:14:48.922 "listen_addresses": [ 00:14:48.922 { 00:14:48.922 "trtype": "VFIOUSER", 00:14:48.922 "adrfam": "IPv4", 00:14:48.922 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:48.922 "trsvcid": "0" 00:14:48.922 } 00:14:48.922 ], 00:14:48.922 "allow_any_host": true, 00:14:48.922 "hosts": [], 00:14:48.922 "serial_number": "SPDK1", 00:14:48.922 "model_number": "SPDK bdev Controller", 00:14:48.922 "max_namespaces": 32, 00:14:48.922 "min_cntlid": 1, 00:14:48.922 "max_cntlid": 65519, 00:14:48.922 "namespaces": [ 00:14:48.922 { 00:14:48.922 "nsid": 1, 00:14:48.922 "bdev_name": "Malloc1", 00:14:48.922 "name": "Malloc1", 00:14:48.922 "nguid": "6C80574F54C042E8B0354789303FBEA2", 00:14:48.922 "uuid": "6c80574f-54c0-42e8-b035-4789303fbea2" 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "nsid": 2, 00:14:48.922 "bdev_name": "Malloc3", 00:14:48.922 "name": "Malloc3", 00:14:48.922 "nguid": "5AAA2EF4A9F54969AE8B04F474A90B4B", 00:14:48.922 "uuid": "5aaa2ef4-a9f5-4969-ae8b-04f474a90b4b" 00:14:48.922 } 00:14:48.922 ] 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:48.922 "subtype": "NVMe", 00:14:48.922 "listen_addresses": [ 00:14:48.922 { 00:14:48.922 "trtype": "VFIOUSER", 00:14:48.922 "adrfam": "IPv4", 00:14:48.922 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:48.922 "trsvcid": "0" 00:14:48.922 } 00:14:48.922 ], 00:14:48.922 "allow_any_host": true, 00:14:48.922 "hosts": [], 00:14:48.922 "serial_number": 
"SPDK2", 00:14:48.922 "model_number": "SPDK bdev Controller", 00:14:48.922 "max_namespaces": 32, 00:14:48.922 "min_cntlid": 1, 00:14:48.922 "max_cntlid": 65519, 00:14:48.922 "namespaces": [ 00:14:48.922 { 00:14:48.922 "nsid": 1, 00:14:48.922 "bdev_name": "Malloc2", 00:14:48.922 "name": "Malloc2", 00:14:48.922 "nguid": "972AFCCB72624AE2B2F465C07DD71002", 00:14:48.922 "uuid": "972afccb-7262-4ae2-b2f4-65c07dd71002" 00:14:48.922 } 00:14:48.922 ] 00:14:48.922 } 00:14:48.922 ] 00:14:49.183 10:35:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3795640 00:14:49.183 10:35:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.183 10:35:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:49.183 10:35:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:49.183 10:35:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:49.183 [2024-07-23 10:35:37.452994] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:49.183 [2024-07-23 10:35:37.453044] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795685 ] 00:14:49.183 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.183 [2024-07-23 10:35:37.496265] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:49.183 [2024-07-23 10:35:37.503799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:49.183 [2024-07-23 10:35:37.503832] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f17d923b000 00:14:49.183 [2024-07-23 10:35:37.504804] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.183 [2024-07-23 10:35:37.505806] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.183 [2024-07-23 10:35:37.506812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.183 [2024-07-23 10:35:37.507820] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:49.183 [2024-07-23 10:35:37.508828] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:49.183 [2024-07-23 10:35:37.509830] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.183 [2024-07-23 10:35:37.510836] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:14:49.183 [2024-07-23 10:35:37.511847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.183 [2024-07-23 10:35:37.512870] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:49.183 [2024-07-23 10:35:37.512894] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f17d7ff1000 00:14:49.183 [2024-07-23 10:35:37.514378] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:49.183 [2024-07-23 10:35:37.533455] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:49.183 [2024-07-23 10:35:37.533500] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:49.183 [2024-07-23 10:35:37.538624] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:49.183 [2024-07-23 10:35:37.538684] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:49.183 [2024-07-23 10:35:37.538779] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:49.183 [2024-07-23 10:35:37.538806] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:49.183 [2024-07-23 10:35:37.538818] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:49.183 [2024-07-23 10:35:37.539635] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:49.183 [2024-07-23 10:35:37.539670] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:49.183 [2024-07-23 10:35:37.539686] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:49.183 [2024-07-23 10:35:37.540638] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:49.183 [2024-07-23 10:35:37.540659] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:49.183 [2024-07-23 10:35:37.540674] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:49.183 [2024-07-23 10:35:37.541640] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:49.183 [2024-07-23 10:35:37.541662] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:49.183 [2024-07-23 10:35:37.542649] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:49.183 [2024-07-23 10:35:37.542671] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:49.183 [2024-07-23 10:35:37.542681] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:49.183 [2024-07-23 10:35:37.542694] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:49.183 [2024-07-23 10:35:37.542805] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:49.183 [2024-07-23 10:35:37.542819] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:49.183 [2024-07-23 10:35:37.542829] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:49.183 [2024-07-23 10:35:37.543656] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:49.183 [2024-07-23 10:35:37.544659] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:49.183 [2024-07-23 10:35:37.545666] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:49.183 [2024-07-23 10:35:37.546657] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:49.183 [2024-07-23 10:35:37.546731] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:49.183 [2024-07-23 10:35:37.547680] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:49.183 [2024-07-23 10:35:37.547700] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:49.183 [2024-07-23 10:35:37.547710] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:49.183 [2024-07-23 10:35:37.547738] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:49.183 [2024-07-23 10:35:37.547754] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:49.183 [2024-07-23 10:35:37.547781] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.183 [2024-07-23 10:35:37.547792] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.183 [2024-07-23 10:35:37.547812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.183 [2024-07-23 10:35:37.556495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:49.183 [2024-07-23 10:35:37.556525] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:49.183 [2024-07-23 10:35:37.556536] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:49.183 [2024-07-23 10:35:37.556545] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:49.183 [2024-07-23 10:35:37.556554] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:49.183 [2024-07-23 10:35:37.556563] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:49.183 [2024-07-23 10:35:37.556571] 
nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:49.183 [2024-07-23 10:35:37.556581] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:49.183 [2024-07-23 10:35:37.556595] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:49.183 [2024-07-23 10:35:37.556613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:49.183 [2024-07-23 10:35:37.564491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 10:35:37.564518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.184 [2024-07-23 10:35:37.564533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.184 [2024-07-23 10:35:37.564547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.184 [2024-07-23 10:35:37.564561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.184 [2024-07-23 10:35:37.564571] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.564589] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.564606] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:49.184 [2024-07-23 10:35:37.572504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 10:35:37.572523] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:49.184 [2024-07-23 10:35:37.572534] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.572548] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.572564] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.572581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:49.184 [2024-07-23 10:35:37.580493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 10:35:37.580576] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.580595] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.580610] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:49.184 [2024-07-23 10:35:37.580620] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:49.184 [2024-07-23 10:35:37.580631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:49.184 [2024-07-23 10:35:37.588491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 10:35:37.588527] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:49.184 [2024-07-23 10:35:37.588546] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.588563] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.588577] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.184 [2024-07-23 10:35:37.588586] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.184 [2024-07-23 10:35:37.588601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.184 [2024-07-23 10:35:37.596492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 10:35:37.596525] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.596543] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:14:49.184 [2024-07-23 10:35:37.596558] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.184 [2024-07-23 10:35:37.596567] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.184 [2024-07-23 10:35:37.596579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.184 [2024-07-23 10:35:37.604494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 10:35:37.604518] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.604532] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.604548] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.604560] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.604570] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:49.184 [2024-07-23 10:35:37.604579] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:49.184 [2024-07-23 10:35:37.604588] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:49.184 [2024-07-23 
10:35:37.604598] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:49.184 [2024-07-23 10:35:37.604630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:49.184 [2024-07-23 10:35:37.612491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 10:35:37.612520] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:49.184 [2024-07-23 10:35:37.617526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 10:35:37.617564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:49.184 [2024-07-23 10:35:37.624493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 10:35:37.624521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:49.184 [2024-07-23 10:35:37.632494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 10:35:37.632525] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:49.184 [2024-07-23 10:35:37.632540] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:49.184 [2024-07-23 10:35:37.632548] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:49.184 [2024-07-23 10:35:37.632555] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 
00:14:49.184 [2024-07-23 10:35:37.632566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:49.184 [2024-07-23 10:35:37.632580] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:49.184 [2024-07-23 10:35:37.632589] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:49.184 [2024-07-23 10:35:37.632599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:49.184 [2024-07-23 10:35:37.632612] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:49.184 [2024-07-23 10:35:37.632621] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.184 [2024-07-23 10:35:37.632631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.184 [2024-07-23 10:35:37.632645] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:49.184 [2024-07-23 10:35:37.632654] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:49.184 [2024-07-23 10:35:37.632665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:49.184 [2024-07-23 10:35:37.640495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 10:35:37.640524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 
10:35:37.640543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:49.184 [2024-07-23 10:35:37.640559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:49.184 ===================================================== 00:14:49.184 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:49.184 ===================================================== 00:14:49.184 Controller Capabilities/Features 00:14:49.184 ================================ 00:14:49.184 Vendor ID: 4e58 00:14:49.184 Subsystem Vendor ID: 4e58 00:14:49.184 Serial Number: SPDK2 00:14:49.184 Model Number: SPDK bdev Controller 00:14:49.184 Firmware Version: 24.05.1 00:14:49.184 Recommended Arb Burst: 6 00:14:49.184 IEEE OUI Identifier: 8d 6b 50 00:14:49.184 Multi-path I/O 00:14:49.184 May have multiple subsystem ports: Yes 00:14:49.184 May have multiple controllers: Yes 00:14:49.184 Associated with SR-IOV VF: No 00:14:49.184 Max Data Transfer Size: 131072 00:14:49.184 Max Number of Namespaces: 32 00:14:49.184 Max Number of I/O Queues: 127 00:14:49.184 NVMe Specification Version (VS): 1.3 00:14:49.184 NVMe Specification Version (Identify): 1.3 00:14:49.184 Maximum Queue Entries: 256 00:14:49.184 Contiguous Queues Required: Yes 00:14:49.184 Arbitration Mechanisms Supported 00:14:49.184 Weighted Round Robin: Not Supported 00:14:49.184 Vendor Specific: Not Supported 00:14:49.184 Reset Timeout: 15000 ms 00:14:49.184 Doorbell Stride: 4 bytes 00:14:49.184 NVM Subsystem Reset: Not Supported 00:14:49.184 Command Sets Supported 00:14:49.184 NVM Command Set: Supported 00:14:49.184 Boot Partition: Not Supported 00:14:49.184 Memory Page Size Minimum: 4096 bytes 00:14:49.184 Memory Page Size Maximum: 4096 bytes 00:14:49.185 Persistent Memory Region: Not Supported 00:14:49.185 Optional Asynchronous Events Supported 00:14:49.185 
Namespace Attribute Notices: Supported 00:14:49.185 Firmware Activation Notices: Not Supported 00:14:49.185 ANA Change Notices: Not Supported 00:14:49.185 PLE Aggregate Log Change Notices: Not Supported 00:14:49.185 LBA Status Info Alert Notices: Not Supported 00:14:49.185 EGE Aggregate Log Change Notices: Not Supported 00:14:49.185 Normal NVM Subsystem Shutdown event: Not Supported 00:14:49.185 Zone Descriptor Change Notices: Not Supported 00:14:49.185 Discovery Log Change Notices: Not Supported 00:14:49.185 Controller Attributes 00:14:49.185 128-bit Host Identifier: Supported 00:14:49.185 Non-Operational Permissive Mode: Not Supported 00:14:49.185 NVM Sets: Not Supported 00:14:49.185 Read Recovery Levels: Not Supported 00:14:49.185 Endurance Groups: Not Supported 00:14:49.185 Predictable Latency Mode: Not Supported 00:14:49.185 Traffic Based Keep ALive: Not Supported 00:14:49.185 Namespace Granularity: Not Supported 00:14:49.185 SQ Associations: Not Supported 00:14:49.185 UUID List: Not Supported 00:14:49.185 Multi-Domain Subsystem: Not Supported 00:14:49.185 Fixed Capacity Management: Not Supported 00:14:49.185 Variable Capacity Management: Not Supported 00:14:49.185 Delete Endurance Group: Not Supported 00:14:49.185 Delete NVM Set: Not Supported 00:14:49.185 Extended LBA Formats Supported: Not Supported 00:14:49.185 Flexible Data Placement Supported: Not Supported 00:14:49.185 00:14:49.185 Controller Memory Buffer Support 00:14:49.185 ================================ 00:14:49.185 Supported: No 00:14:49.185 00:14:49.185 Persistent Memory Region Support 00:14:49.185 ================================ 00:14:49.185 Supported: No 00:14:49.185 00:14:49.185 Admin Command Set Attributes 00:14:49.185 ============================ 00:14:49.185 Security Send/Receive: Not Supported 00:14:49.185 Format NVM: Not Supported 00:14:49.185 Firmware Activate/Download: Not Supported 00:14:49.185 Namespace Management: Not Supported 00:14:49.185 Device Self-Test: Not Supported 
00:14:49.185 Directives: Not Supported 00:14:49.185 NVMe-MI: Not Supported 00:14:49.185 Virtualization Management: Not Supported 00:14:49.185 Doorbell Buffer Config: Not Supported 00:14:49.185 Get LBA Status Capability: Not Supported 00:14:49.185 Command & Feature Lockdown Capability: Not Supported 00:14:49.185 Abort Command Limit: 4 00:14:49.185 Async Event Request Limit: 4 00:14:49.185 Number of Firmware Slots: N/A 00:14:49.185 Firmware Slot 1 Read-Only: N/A 00:14:49.185 Firmware Activation Without Reset: N/A 00:14:49.185 Multiple Update Detection Support: N/A 00:14:49.185 Firmware Update Granularity: No Information Provided 00:14:49.185 Per-Namespace SMART Log: No 00:14:49.185 Asymmetric Namespace Access Log Page: Not Supported 00:14:49.185 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:49.185 Command Effects Log Page: Supported 00:14:49.185 Get Log Page Extended Data: Supported 00:14:49.185 Telemetry Log Pages: Not Supported 00:14:49.185 Persistent Event Log Pages: Not Supported 00:14:49.185 Supported Log Pages Log Page: May Support 00:14:49.185 Commands Supported & Effects Log Page: Not Supported 00:14:49.185 Feature Identifiers & Effects Log Page:May Support 00:14:49.185 NVMe-MI Commands & Effects Log Page: May Support 00:14:49.185 Data Area 4 for Telemetry Log: Not Supported 00:14:49.185 Error Log Page Entries Supported: 128 00:14:49.185 Keep Alive: Supported 00:14:49.185 Keep Alive Granularity: 10000 ms 00:14:49.185 00:14:49.185 NVM Command Set Attributes 00:14:49.185 ========================== 00:14:49.185 Submission Queue Entry Size 00:14:49.185 Max: 64 00:14:49.185 Min: 64 00:14:49.185 Completion Queue Entry Size 00:14:49.185 Max: 16 00:14:49.185 Min: 16 00:14:49.185 Number of Namespaces: 32 00:14:49.185 Compare Command: Supported 00:14:49.185 Write Uncorrectable Command: Not Supported 00:14:49.185 Dataset Management Command: Supported 00:14:49.185 Write Zeroes Command: Supported 00:14:49.185 Set Features Save Field: Not Supported 00:14:49.185 
Reservations: Not Supported 00:14:49.185 Timestamp: Not Supported 00:14:49.185 Copy: Supported 00:14:49.185 Volatile Write Cache: Present 00:14:49.185 Atomic Write Unit (Normal): 1 00:14:49.185 Atomic Write Unit (PFail): 1 00:14:49.185 Atomic Compare & Write Unit: 1 00:14:49.185 Fused Compare & Write: Supported 00:14:49.185 Scatter-Gather List 00:14:49.185 SGL Command Set: Supported (Dword aligned) 00:14:49.185 SGL Keyed: Not Supported 00:14:49.185 SGL Bit Bucket Descriptor: Not Supported 00:14:49.185 SGL Metadata Pointer: Not Supported 00:14:49.185 Oversized SGL: Not Supported 00:14:49.185 SGL Metadata Address: Not Supported 00:14:49.185 SGL Offset: Not Supported 00:14:49.185 Transport SGL Data Block: Not Supported 00:14:49.185 Replay Protected Memory Block: Not Supported 00:14:49.185 00:14:49.185 Firmware Slot Information 00:14:49.185 ========================= 00:14:49.185 Active slot: 1 00:14:49.185 Slot 1 Firmware Revision: 24.05.1 00:14:49.185 00:14:49.185 00:14:49.185 Commands Supported and Effects 00:14:49.185 ============================== 00:14:49.185 Admin Commands 00:14:49.185 -------------- 00:14:49.185 Get Log Page (02h): Supported 00:14:49.185 Identify (06h): Supported 00:14:49.185 Abort (08h): Supported 00:14:49.185 Set Features (09h): Supported 00:14:49.185 Get Features (0Ah): Supported 00:14:49.185 Asynchronous Event Request (0Ch): Supported 00:14:49.185 Keep Alive (18h): Supported 00:14:49.185 I/O Commands 00:14:49.185 ------------ 00:14:49.185 Flush (00h): Supported LBA-Change 00:14:49.185 Write (01h): Supported LBA-Change 00:14:49.185 Read (02h): Supported 00:14:49.185 Compare (05h): Supported 00:14:49.185 Write Zeroes (08h): Supported LBA-Change 00:14:49.185 Dataset Management (09h): Supported LBA-Change 00:14:49.185 Copy (19h): Supported LBA-Change 00:14:49.185 Unknown (79h): Supported LBA-Change 00:14:49.185 Unknown (7Ah): Supported 00:14:49.185 00:14:49.185 Error Log 00:14:49.185 ========= 00:14:49.185 00:14:49.185 Arbitration 00:14:49.185 
=========== 00:14:49.185 Arbitration Burst: 1 00:14:49.185 00:14:49.185 Power Management 00:14:49.185 ================ 00:14:49.185 Number of Power States: 1 00:14:49.185 Current Power State: Power State #0 00:14:49.185 Power State #0: 00:14:49.185 Max Power: 0.00 W 00:14:49.185 Non-Operational State: Operational 00:14:49.185 Entry Latency: Not Reported 00:14:49.185 Exit Latency: Not Reported 00:14:49.185 Relative Read Throughput: 0 00:14:49.185 Relative Read Latency: 0 00:14:49.185 Relative Write Throughput: 0 00:14:49.185 Relative Write Latency: 0 00:14:49.185 Idle Power: Not Reported 00:14:49.185 Active Power: Not Reported 00:14:49.185 Non-Operational Permissive Mode: Not Supported 00:14:49.185 00:14:49.185 Health Information 00:14:49.185 ================== 00:14:49.185 Critical Warnings: 00:14:49.185 Available Spare Space: OK 00:14:49.185 Temperature: OK 00:14:49.185 Device Reliability: OK 00:14:49.185 Read Only: No 00:14:49.185 Volatile Memory Backup: OK 00:14:49.185 Current Temperature: 0 Kelvin[2024-07-23 10:35:37.640705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:49.185 [2024-07-23 10:35:37.648491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:49.185 [2024-07-23 10:35:37.648540] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:49.185 [2024-07-23 10:35:37.648559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.185 [2024-07-23 10:35:37.648572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.185 [2024-07-23 10:35:37.648584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:14:49.185 [2024-07-23 10:35:37.648595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.185 [2024-07-23 10:35:37.652493] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:49.185 [2024-07-23 10:35:37.652516] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:49.185 [2024-07-23 10:35:37.652709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:49.185 [2024-07-23 10:35:37.652790] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:49.185 [2024-07-23 10:35:37.652810] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:49.185 [2024-07-23 10:35:37.653719] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:49.185 [2024-07-23 10:35:37.653745] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:49.185 [2024-07-23 10:35:37.653818] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:49.186 [2024-07-23 10:35:37.655351] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:49.445 (-273 Celsius) 00:14:49.445 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:49.445 Available Spare: 0% 00:14:49.445 Available Spare Threshold: 0% 00:14:49.445 Life Percentage Used: 0% 00:14:49.445 Data Units Read: 0 00:14:49.445 Data Units Written: 0 00:14:49.445 Host Read Commands: 0 00:14:49.445 Host 
Write Commands: 0 00:14:49.445 Controller Busy Time: 0 minutes 00:14:49.445 Power Cycles: 0 00:14:49.445 Power On Hours: 0 hours 00:14:49.445 Unsafe Shutdowns: 0 00:14:49.445 Unrecoverable Media Errors: 0 00:14:49.445 Lifetime Error Log Entries: 0 00:14:49.445 Warning Temperature Time: 0 minutes 00:14:49.445 Critical Temperature Time: 0 minutes 00:14:49.445 00:14:49.445 Number of Queues 00:14:49.445 ================ 00:14:49.445 Number of I/O Submission Queues: 127 00:14:49.445 Number of I/O Completion Queues: 127 00:14:49.445 00:14:49.445 Active Namespaces 00:14:49.445 ================= 00:14:49.445 Namespace ID:1 00:14:49.445 Error Recovery Timeout: Unlimited 00:14:49.445 Command Set Identifier: NVM (00h) 00:14:49.445 Deallocate: Supported 00:14:49.445 Deallocated/Unwritten Error: Not Supported 00:14:49.445 Deallocated Read Value: Unknown 00:14:49.445 Deallocate in Write Zeroes: Not Supported 00:14:49.445 Deallocated Guard Field: 0xFFFF 00:14:49.445 Flush: Supported 00:14:49.445 Reservation: Supported 00:14:49.445 Namespace Sharing Capabilities: Multiple Controllers 00:14:49.445 Size (in LBAs): 131072 (0GiB) 00:14:49.445 Capacity (in LBAs): 131072 (0GiB) 00:14:49.445 Utilization (in LBAs): 131072 (0GiB) 00:14:49.445 NGUID: 972AFCCB72624AE2B2F465C07DD71002 00:14:49.445 UUID: 972afccb-7262-4ae2-b2f4-65c07dd71002 00:14:49.445 Thin Provisioning: Not Supported 00:14:49.445 Per-NS Atomic Units: Yes 00:14:49.445 Atomic Boundary Size (Normal): 0 00:14:49.445 Atomic Boundary Size (PFail): 0 00:14:49.445 Atomic Boundary Offset: 0 00:14:49.445 Maximum Single Source Range Length: 65535 00:14:49.445 Maximum Copy Length: 65535 00:14:49.445 Maximum Source Range Count: 1 00:14:49.445 NGUID/EUI64 Never Reused: No 00:14:49.445 Namespace Write Protected: No 00:14:49.445 Number of LBA Formats: 1 00:14:49.445 Current LBA Format: LBA Format #00 00:14:49.445 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:49.445 00:14:49.445 10:35:37 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:49.445 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.445 [2024-07-23 10:35:37.870827] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.724 Initializing NVMe Controllers 00:14:54.724 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:54.724 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:54.724 Initialization complete. Launching workers. 00:14:54.724 ======================================================== 00:14:54.724 Latency(us) 00:14:54.724 Device Information : IOPS MiB/s Average min max 00:14:54.724 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24078.00 94.05 5321.50 1468.14 10536.66 00:14:54.724 ======================================================== 00:14:54.724 Total : 24078.00 94.05 5321.50 1468.14 10536.66 00:14:54.724 00:14:54.724 [2024-07-23 10:35:42.978792] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.724 10:35:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:54.724 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.724 [2024-07-23 10:35:43.203457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:00.009 Initializing NVMe Controllers 00:15:00.009 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: 
nqn.2019-07.io.spdk:cnode2 00:15:00.009 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:00.009 Initialization complete. Launching workers. 00:15:00.009 ======================================================== 00:15:00.009 Latency(us) 00:15:00.009 Device Information : IOPS MiB/s Average min max 00:15:00.009 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24158.40 94.37 5302.78 1464.17 10512.83 00:15:00.009 ======================================================== 00:15:00.009 Total : 24158.40 94.37 5302.78 1464.17 10512.83 00:15:00.009 00:15:00.009 [2024-07-23 10:35:48.225293] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:00.009 10:35:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:00.009 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.009 [2024-07-23 10:35:48.445769] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:05.286 [2024-07-23 10:35:53.580626] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:05.286 Initializing NVMe Controllers 00:15:05.286 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:05.286 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:05.286 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:05.286 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:05.286 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:05.286 Initialization complete. 
Launching workers. 00:15:05.286 Starting thread on core 2 00:15:05.286 Starting thread on core 3 00:15:05.286 Starting thread on core 1 00:15:05.286 10:35:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:05.286 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.545 [2024-07-23 10:35:53.869043] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:09.743 [2024-07-23 10:35:57.726685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:09.743 Initializing NVMe Controllers 00:15:09.743 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:09.743 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:09.743 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:09.743 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:09.743 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:09.743 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:09.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:09.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:09.743 Initialization complete. Launching workers. 
00:15:09.743 Starting thread on core 1 with urgent priority queue 00:15:09.743 Starting thread on core 2 with urgent priority queue 00:15:09.743 Starting thread on core 3 with urgent priority queue 00:15:09.743 Starting thread on core 0 with urgent priority queue 00:15:09.743 SPDK bdev Controller (SPDK2 ) core 0: 4414.00 IO/s 22.66 secs/100000 ios 00:15:09.743 SPDK bdev Controller (SPDK2 ) core 1: 2987.33 IO/s 33.47 secs/100000 ios 00:15:09.743 SPDK bdev Controller (SPDK2 ) core 2: 4195.00 IO/s 23.84 secs/100000 ios 00:15:09.743 SPDK bdev Controller (SPDK2 ) core 3: 3679.67 IO/s 27.18 secs/100000 ios 00:15:09.743 ======================================================== 00:15:09.743 00:15:09.743 10:35:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:09.743 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.743 [2024-07-23 10:35:58.000034] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:09.743 Initializing NVMe Controllers 00:15:09.743 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:09.743 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:09.743 Namespace ID: 1 size: 0GB 00:15:09.743 Initialization complete. 00:15:09.744 INFO: using host memory buffer for IO 00:15:09.744 Hello world! 
00:15:09.744 [2024-07-23 10:35:58.012147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:09.744 10:35:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:09.744 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.004 [2024-07-23 10:35:58.282498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:10.945 Initializing NVMe Controllers 00:15:10.945 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:10.945 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:10.945 Initialization complete. Launching workers. 00:15:10.945 submit (in ns) avg, min, max = 10120.8, 4477.0, 4017672.6 00:15:10.945 complete (in ns) avg, min, max = 28945.9, 2651.9, 6001891.9 00:15:10.945 00:15:10.945 Submit histogram 00:15:10.945 ================ 00:15:10.945 Range in us Cumulative Count 00:15:10.945 4.456 - 4.480: 0.0084% ( 1) 00:15:10.945 4.480 - 4.504: 0.1596% ( 18) 00:15:10.945 4.504 - 4.527: 0.9745% ( 97) 00:15:10.945 4.527 - 4.551: 3.1586% ( 260) 00:15:10.945 4.551 - 4.575: 6.9052% ( 446) 00:15:10.945 4.575 - 4.599: 10.0134% ( 370) 00:15:10.945 4.599 - 4.622: 12.3404% ( 277) 00:15:10.945 4.622 - 4.646: 13.6089% ( 151) 00:15:10.945 4.646 - 4.670: 14.3733% ( 91) 00:15:10.945 4.670 - 4.693: 15.1798% ( 96) 00:15:10.945 4.693 - 4.717: 17.0783% ( 226) 00:15:10.945 4.717 - 4.741: 20.6737% ( 428) 00:15:10.945 4.741 - 4.764: 25.4200% ( 565) 00:15:10.945 4.764 - 4.788: 28.3770% ( 352) 00:15:10.945 4.788 - 4.812: 30.2923% ( 228) 00:15:10.945 4.812 - 4.836: 30.8720% ( 69) 00:15:10.945 4.836 - 4.859: 31.3004% ( 51) 00:15:10.945 4.859 - 4.883: 31.6364% ( 40) 00:15:10.945 4.883 - 4.907: 31.9892% ( 42) 00:15:10.945 4.907 - 4.930: 32.3589% ( 44) 
00:15:10.945 4.930 - 4.954: 32.7117% ( 42) 00:15:10.945 4.954 - 4.978: 32.9385% ( 27) 00:15:10.945 4.978 - 5.001: 33.1149% ( 21) 00:15:10.945 5.001 - 5.025: 33.2997% ( 22) 00:15:10.945 5.025 - 5.049: 33.3501% ( 6) 00:15:10.945 5.049 - 5.073: 33.3837% ( 4) 00:15:10.945 5.073 - 5.096: 33.4509% ( 8) 00:15:10.945 5.096 - 5.120: 33.7366% ( 34) 00:15:10.945 5.120 - 5.144: 34.4590% ( 86) 00:15:10.945 5.144 - 5.167: 40.6838% ( 741) 00:15:10.945 5.167 - 5.191: 45.6401% ( 590) 00:15:10.945 5.191 - 5.215: 47.9923% ( 280) 00:15:10.945 5.215 - 5.239: 49.4624% ( 175) 00:15:10.945 5.239 - 5.262: 50.6132% ( 137) 00:15:10.945 5.262 - 5.286: 51.6045% ( 118) 00:15:10.945 5.286 - 5.310: 56.8380% ( 623) 00:15:10.945 5.310 - 5.333: 61.9876% ( 613) 00:15:10.945 5.333 - 5.357: 64.8774% ( 344) 00:15:10.945 5.357 - 5.381: 66.2046% ( 158) 00:15:10.945 5.381 - 5.404: 68.5904% ( 284) 00:15:10.945 5.404 - 5.428: 70.1193% ( 182) 00:15:10.945 5.428 - 5.452: 70.8837% ( 91) 00:15:10.945 5.452 - 5.476: 71.1862% ( 36) 00:15:10.945 5.476 - 5.499: 71.3038% ( 14) 00:15:10.945 5.499 - 5.523: 71.4214% ( 14) 00:15:10.945 5.523 - 5.547: 81.0652% ( 1148) 00:15:10.945 5.547 - 5.570: 87.3320% ( 746) 00:15:10.945 5.570 - 5.594: 91.3390% ( 477) 00:15:10.945 5.594 - 5.618: 92.9856% ( 196) 00:15:10.945 5.618 - 5.641: 93.9432% ( 114) 00:15:10.945 5.641 - 5.665: 94.4304% ( 58) 00:15:10.945 5.665 - 5.689: 94.6825% ( 30) 00:15:10.945 5.689 - 5.713: 94.8085% ( 15) 00:15:10.945 5.713 - 5.736: 94.9177% ( 13) 00:15:10.945 5.736 - 5.760: 94.9597% ( 5) 00:15:10.945 5.760 - 5.784: 95.2537% ( 35) 00:15:10.945 5.784 - 5.807: 95.4637% ( 25) 00:15:10.945 5.807 - 5.831: 95.6821% ( 26) 00:15:10.945 5.831 - 5.855: 95.7409% ( 7) 00:15:10.945 5.855 - 5.879: 95.8753% ( 16) 00:15:10.945 5.879 - 5.902: 95.9509% ( 9) 00:15:10.945 5.902 - 5.926: 96.0013% ( 6) 00:15:10.945 5.926 - 5.950: 96.0853% ( 10) 00:15:10.945 5.950 - 5.973: 96.1274% ( 5) 00:15:10.945 5.973 - 5.997: 96.1778% ( 6) 00:15:10.945 5.997 - 6.021: 96.1946% ( 2) 00:15:10.945 
6.021 - 6.044: 96.2366% ( 5) 00:15:10.945 6.044 - 6.068: 96.2786% ( 5) 00:15:10.945 6.068 - 6.116: 96.3878% ( 13) 00:15:10.945 6.116 - 6.163: 96.4718% ( 10) 00:15:10.945 6.163 - 6.210: 96.5054% ( 4) 00:15:10.945 6.210 - 6.258: 96.5978% ( 11) 00:15:10.945 6.258 - 6.305: 96.7154% ( 14) 00:15:10.945 6.305 - 6.353: 96.9758% ( 31) 00:15:10.945 6.353 - 6.400: 97.0514% ( 9) 00:15:10.945 6.400 - 6.447: 97.0766% ( 3) 00:15:10.945 6.447 - 6.495: 97.2362% ( 19) 00:15:10.945 6.495 - 6.542: 97.2782% ( 5) 00:15:10.945 6.542 - 6.590: 97.3202% ( 5) 00:15:10.945 6.590 - 6.637: 97.4210% ( 12) 00:15:10.946 6.637 - 6.684: 97.5050% ( 10) 00:15:10.946 6.684 - 6.732: 97.5134% ( 1) 00:15:10.946 6.732 - 6.779: 97.5386% ( 3) 00:15:10.946 6.779 - 6.827: 97.5974% ( 7) 00:15:10.946 6.827 - 6.874: 98.1939% ( 71) 00:15:10.946 6.874 - 6.921: 98.5971% ( 48) 00:15:10.946 6.921 - 6.969: 98.8323% ( 28) 00:15:10.946 6.969 - 7.016: 99.0003% ( 20) 00:15:10.946 7.016 - 7.064: 99.0255% ( 3) 00:15:10.946 7.064 - 7.111: 99.0675% ( 5) 00:15:10.946 7.159 - 7.206: 99.0759% ( 1) 00:15:10.946 7.206 - 7.253: 99.1011% ( 3) 00:15:10.946 7.348 - 7.396: 99.1095% ( 1) 00:15:10.946 7.822 - 7.870: 99.1179% ( 1) 00:15:10.946 7.870 - 7.917: 99.1347% ( 2) 00:15:10.946 8.296 - 8.344: 99.1515% ( 2) 00:15:10.946 8.391 - 8.439: 99.1683% ( 2) 00:15:10.946 8.439 - 8.486: 99.1851% ( 2) 00:15:10.946 8.486 - 8.533: 99.1935% ( 1) 00:15:10.946 8.676 - 8.723: 99.2103% ( 2) 00:15:10.946 8.723 - 8.770: 99.2188% ( 1) 00:15:10.946 8.818 - 8.865: 99.2272% ( 1) 00:15:10.946 8.865 - 8.913: 99.2440% ( 2) 00:15:10.946 8.913 - 8.960: 99.2692% ( 3) 00:15:10.946 9.292 - 9.339: 99.2776% ( 1) 00:15:10.946 9.387 - 9.434: 99.2860% ( 1) 00:15:10.946 9.529 - 9.576: 99.3028% ( 2) 00:15:10.946 9.719 - 9.766: 99.3112% ( 1) 00:15:10.946 9.766 - 9.813: 99.3280% ( 2) 00:15:10.946 9.813 - 9.861: 99.3364% ( 1) 00:15:10.946 9.908 - 9.956: 99.3448% ( 1) 00:15:10.946 9.956 - 10.003: 99.3532% ( 1) 00:15:10.946 10.003 - 10.050: 99.3616% ( 1) 00:15:10.946 10.145 - 
10.193: 99.3784% ( 2) 00:15:10.946 10.287 - 10.335: 99.3868% ( 1) 00:15:10.946 10.430 - 10.477: 99.3952% ( 1) 00:15:10.946 10.619 - 10.667: 99.4036% ( 1) 00:15:10.946 10.809 - 10.856: 99.4204% ( 2) 00:15:10.946 11.141 - 11.188: 99.4372% ( 2) 00:15:10.946 11.378 - 11.425: 99.4456% ( 1) 00:15:10.946 11.567 - 11.615: 99.4540% ( 1) 00:15:10.946 12.136 - 12.231: 99.4624% ( 1) 00:15:10.946 12.326 - 12.421: 99.4708% ( 1) 00:15:10.946 12.421 - 12.516: 99.4792% ( 1) 00:15:10.946 12.516 - 12.610: 99.4876% ( 1) 00:15:10.946 12.800 - 12.895: 99.4960% ( 1) 00:15:10.946 13.084 - 13.179: 99.5044% ( 1) 00:15:10.946 13.179 - 13.274: 99.5128% ( 1) 00:15:10.946 13.274 - 13.369: 99.5212% ( 1) 00:15:10.946 13.464 - 13.559: 99.5296% ( 1) 00:15:10.946 13.653 - 13.748: 99.5380% ( 1) 00:15:10.946 13.748 - 13.843: 99.6052% ( 8) 00:15:10.946 13.843 - 13.938: 99.6892% ( 10) 00:15:10.946 13.938 - 14.033: 99.7396% ( 6) 00:15:10.946 14.127 - 14.222: 99.7480% ( 1) 00:15:10.946 14.222 - 14.317: 99.7564% ( 1) 00:15:10.946 14.317 - 14.412: 99.7732% ( 2) 00:15:10.946 14.412 - 14.507: 99.7816% ( 1) 00:15:10.946 14.507 - 14.601: 99.7900% ( 1) 00:15:10.946 14.601 - 14.696: 99.7984% ( 1) 00:15:10.946 15.265 - 15.360: 99.8068% ( 1) 00:15:10.946 15.550 - 15.644: 99.8152% ( 1) 00:15:10.946 16.498 - 16.593: 99.8236% ( 1) 00:15:10.946 16.972 - 17.067: 99.8320% ( 1) 00:15:10.946 17.067 - 17.161: 99.8404% ( 1) 00:15:10.946 17.351 - 17.446: 99.8572% ( 2) 00:15:10.946 18.868 - 18.963: 99.8656% ( 1) 00:15:10.946 22.187 - 22.281: 99.8740% ( 1) 00:15:10.946 3009.801 - 3021.938: 99.8824% ( 1) 00:15:10.946 3021.938 - 3034.074: 99.8908% ( 1) 00:15:10.946 3980.705 - 4004.978: 99.9412% ( 6) 00:15:10.946 4004.978 - 4029.250: 100.0000% ( 7) 00:15:10.946 00:15:10.946 Complete histogram 00:15:10.946 ================== 00:15:10.946 Range in us Cumulative Count 00:15:10.946 2.643 - 2.655: 0.0756% ( 9) 00:15:10.946 2.655 - 2.667: 13.9701% ( 1654) 00:15:10.946 2.667 - 2.679: 59.3834% ( 5406) 00:15:10.946 2.679 - 2.690: 70.3209% 
( 1302) 00:15:10.946 2.690 - 2.702: 75.3024% ( 593) 00:15:10.946 2.702 - 2.714: 84.8874% ( 1141) 00:15:10.946 2.714 - 2.726: 91.2802% ( 761) 00:15:10.946 2.726 - 2.738: 95.2033% ( 467) 00:15:10.946 2.738 - 2.750: 96.2786% ( 128) 00:15:10.946 2.750 - 2.761: 96.7154% ( 52) 00:15:10.946 2.761 - 2.773: 97.1270% ( 49) 00:15:10.946 2.773 - 2.785: 97.4462% ( 38) 00:15:10.946 [2024-07-23 10:35:59.386728] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:10.946 2.785 - 2.797: 97.7571% ( 37) 00:15:10.946 2.797 - 2.809: 98.0007% ( 29) 00:15:10.946 2.809 - 2.821: 98.1519% ( 18) 00:15:10.946 2.821 - 2.833: 98.2023% ( 6) 00:15:10.946 2.833 - 2.844: 98.2359% ( 4) 00:15:10.946 2.844 - 2.856: 98.2443% ( 1) 00:15:10.946 2.856 - 2.868: 98.2611% ( 2) 00:15:10.946 2.868 - 2.880: 98.2779% ( 2) 00:15:10.946 2.880 - 2.892: 98.2947% ( 2) 00:15:10.946 2.904 - 2.916: 98.3031% ( 1) 00:15:10.946 2.916 - 2.927: 98.3115% ( 1) 00:15:10.946 2.927 - 2.939: 98.3619% ( 6) 00:15:10.946 2.939 - 2.951: 98.3703% ( 1) 00:15:10.946 2.951 - 2.963: 98.3955% ( 3) 00:15:10.946 2.987 - 2.999: 98.4039% ( 1) 00:15:10.946 3.022 - 3.034: 98.4207% ( 2) 00:15:10.946 3.034 - 3.058: 98.4375% ( 2) 00:15:10.946 3.081 - 3.105: 98.4543% ( 2) 00:15:10.946 3.105 - 3.129: 98.4711% ( 2) 00:15:10.946 3.129 - 3.153: 98.5047% ( 4) 00:15:10.946 3.153 - 3.176: 98.5299% ( 3) 00:15:10.946 3.176 - 3.200: 98.5635% ( 4) 00:15:10.946 3.224 - 3.247: 98.5719% ( 1) 00:15:10.946 3.247 - 3.271: 98.5887% ( 2) 00:15:10.946 3.271 - 3.295: 98.6055% ( 2) 00:15:10.946 3.295 - 3.319: 98.6223% ( 2) 00:15:10.946 3.342 - 3.366: 98.6307% ( 1) 00:15:10.946 3.366 - 3.390: 98.6391% ( 1) 00:15:10.946 3.390 - 3.413: 98.6643% ( 3) 00:15:10.946 3.413 - 3.437: 98.6895% ( 3) 00:15:10.946 3.437 - 3.461: 98.7063% ( 2) 00:15:10.946 3.461 - 3.484: 98.7483% ( 5) 00:15:10.946 3.484 - 3.508: 98.7987% ( 6) 00:15:10.946 3.508 - 3.532: 98.8491% ( 6) 00:15:10.946 3.532 - 3.556: 98.8911% ( 5) 00:15:10.946 3.579 - 
3.603: 98.9163% ( 3) 00:15:10.946 3.627 - 3.650: 98.9247% ( 1) 00:15:10.946 3.650 - 3.674: 98.9415% ( 2) 00:15:10.946 3.721 - 3.745: 98.9583% ( 2) 00:15:10.946 3.745 - 3.769: 98.9751% ( 2) 00:15:10.946 3.959 - 3.982: 98.9835% ( 1) 00:15:10.946 4.030 - 4.053: 98.9919% ( 1) 00:15:10.946 4.124 - 4.148: 99.0003% ( 1) 00:15:10.946 4.196 - 4.219: 99.0087% ( 1) 00:15:10.946 4.243 - 4.267: 99.0171% ( 1) 00:15:10.946 4.267 - 4.290: 99.0255% ( 1) 00:15:10.946 4.314 - 4.338: 99.0339% ( 1) 00:15:10.946 4.338 - 4.361: 99.0423% ( 1) 00:15:10.946 4.527 - 4.551: 99.0507% ( 1) 00:15:10.946 5.073 - 5.096: 99.0675% ( 2) 00:15:10.947 5.452 - 5.476: 99.0759% ( 1) 00:15:10.947 5.902 - 5.926: 99.0843% ( 1) 00:15:10.947 5.926 - 5.950: 99.0927% ( 1) 00:15:10.947 5.973 - 5.997: 99.1011% ( 1) 00:15:10.947 6.021 - 6.044: 99.1095% ( 1) 00:15:10.947 6.116 - 6.163: 99.1263% ( 2) 00:15:10.947 6.258 - 6.305: 99.1431% ( 2) 00:15:10.947 6.590 - 6.637: 99.1515% ( 1) 00:15:10.947 6.684 - 6.732: 99.1683% ( 2) 00:15:10.947 6.874 - 6.921: 99.1767% ( 1) 00:15:10.947 7.443 - 7.490: 99.1851% ( 1) 00:15:10.947 7.680 - 7.727: 99.1935% ( 1) 00:15:10.947 7.727 - 7.775: 99.2019% ( 1) 00:15:10.947 7.822 - 7.870: 99.2103% ( 1) 00:15:10.947 8.012 - 8.059: 99.2188% ( 1) 00:15:10.947 8.201 - 8.249: 99.2356% ( 2) 00:15:10.947 8.344 - 8.391: 99.2440% ( 1) 00:15:10.947 9.150 - 9.197: 99.2524% ( 1) 00:15:10.947 9.339 - 9.387: 99.2608% ( 1) 00:15:10.947 9.481 - 9.529: 99.2692% ( 1) 00:15:10.947 9.861 - 9.908: 99.2776% ( 1) 00:15:10.947 10.335 - 10.382: 99.2860% ( 1) 00:15:10.947 13.084 - 13.179: 99.2944% ( 1) 00:15:10.947 14.696 - 14.791: 99.3028% ( 1) 00:15:10.947 14.886 - 14.981: 99.3112% ( 1) 00:15:10.947 14.981 - 15.076: 99.3196% ( 1) 00:15:10.947 17.636 - 17.730: 99.3280% ( 1) 00:15:10.947 19.153 - 19.247: 99.3364% ( 1) 00:15:10.947 29.393 - 29.582: 99.3448% ( 1) 00:15:10.947 2148.124 - 2160.261: 99.3532% ( 1) 00:15:10.947 3009.801 - 3021.938: 99.3616% ( 1) 00:15:10.947 3980.705 - 4004.978: 99.7396% ( 45) 
00:15:10.947 4004.978 - 4029.250: 99.9748% ( 28) 00:15:10.947 4029.250 - 4053.523: 99.9832% ( 1) 00:15:10.947 5000.154 - 5024.427: 99.9916% ( 1) 00:15:10.947 5995.330 - 6019.603: 100.0000% ( 1) 00:15:10.947 00:15:10.947 10:35:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:10.947 10:35:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:10.947 10:35:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:10.947 10:35:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:10.947 10:35:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.515 [ 00:15:11.515 { 00:15:11.515 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.515 "subtype": "Discovery", 00:15:11.516 "listen_addresses": [], 00:15:11.516 "allow_any_host": true, 00:15:11.516 "hosts": [] 00:15:11.516 }, 00:15:11.516 { 00:15:11.516 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.516 "subtype": "NVMe", 00:15:11.516 "listen_addresses": [ 00:15:11.516 { 00:15:11.516 "trtype": "VFIOUSER", 00:15:11.516 "adrfam": "IPv4", 00:15:11.516 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.516 "trsvcid": "0" 00:15:11.516 } 00:15:11.516 ], 00:15:11.516 "allow_any_host": true, 00:15:11.516 "hosts": [], 00:15:11.516 "serial_number": "SPDK1", 00:15:11.516 "model_number": "SPDK bdev Controller", 00:15:11.516 "max_namespaces": 32, 00:15:11.516 "min_cntlid": 1, 00:15:11.516 "max_cntlid": 65519, 00:15:11.516 "namespaces": [ 00:15:11.516 { 00:15:11.516 "nsid": 1, 00:15:11.516 "bdev_name": "Malloc1", 00:15:11.516 "name": "Malloc1", 00:15:11.516 "nguid": "6C80574F54C042E8B0354789303FBEA2", 00:15:11.516 "uuid": "6c80574f-54c0-42e8-b035-4789303fbea2" 
00:15:11.516 }, 00:15:11.516 { 00:15:11.516 "nsid": 2, 00:15:11.516 "bdev_name": "Malloc3", 00:15:11.516 "name": "Malloc3", 00:15:11.516 "nguid": "5AAA2EF4A9F54969AE8B04F474A90B4B", 00:15:11.516 "uuid": "5aaa2ef4-a9f5-4969-ae8b-04f474a90b4b" 00:15:11.516 } 00:15:11.516 ] 00:15:11.516 }, 00:15:11.516 { 00:15:11.516 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.516 "subtype": "NVMe", 00:15:11.516 "listen_addresses": [ 00:15:11.516 { 00:15:11.516 "trtype": "VFIOUSER", 00:15:11.516 "adrfam": "IPv4", 00:15:11.516 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.516 "trsvcid": "0" 00:15:11.516 } 00:15:11.516 ], 00:15:11.516 "allow_any_host": true, 00:15:11.516 "hosts": [], 00:15:11.516 "serial_number": "SPDK2", 00:15:11.516 "model_number": "SPDK bdev Controller", 00:15:11.516 "max_namespaces": 32, 00:15:11.516 "min_cntlid": 1, 00:15:11.516 "max_cntlid": 65519, 00:15:11.516 "namespaces": [ 00:15:11.516 { 00:15:11.516 "nsid": 1, 00:15:11.516 "bdev_name": "Malloc2", 00:15:11.516 "name": "Malloc2", 00:15:11.516 "nguid": "972AFCCB72624AE2B2F465C07DD71002", 00:15:11.516 "uuid": "972afccb-7262-4ae2-b2f4-65c07dd71002" 00:15:11.516 } 00:15:11.516 ] 00:15:11.516 } 00:15:11.516 ] 00:15:11.516 10:35:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:11.516 10:35:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3797750 00:15:11.516 10:35:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:11.516 10:35:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:11.516 10:35:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:11.516 10:35:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:11.516 10:35:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:11.516 10:35:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:11.516 10:35:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:11.516 10:35:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:11.516 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.516 [2024-07-23 10:35:59.891145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.775 Malloc4 00:15:11.775 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:12.034 [2024-07-23 10:36:00.348774] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.034 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:12.034 Asynchronous Event Request test 00:15:12.034 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:12.034 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:12.034 Registering asynchronous event callbacks... 00:15:12.034 Starting namespace attribute notice tests for all controllers... 00:15:12.034 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:12.034 aer_cb - Changed Namespace 00:15:12.034 Cleaning up... 
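[Editor's note] The `waitforfile /tmp/aer_touch_file` handshake traced above is a poll-until-exists loop: the AER test binary touches the file when it is ready, and the harness spins on `[ -e … ]` until then. A minimal sketch of that pattern, assuming an illustrative retry cap and sleep interval (the harness's actual limits are not shown in this trace):

```shell
# Poll until a file exists -- the touch-file handshake the harness uses to
# wait for the aer test binary to signal readiness.
# The 100-iteration cap (~10 s at 0.1 s per poll) is an illustrative guess.
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ]; do
        if [ "$i" -gt 100 ]; then
            return 1  # timed out waiting for the touch file
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 0
}
```

As the trace shows, the waiter then removes the touch file (`rm -f /tmp/aer_touch_file`) before continuing.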
00:15:12.293 [ 00:15:12.293 { 00:15:12.293 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:12.293 "subtype": "Discovery", 00:15:12.293 "listen_addresses": [], 00:15:12.293 "allow_any_host": true, 00:15:12.293 "hosts": [] 00:15:12.293 }, 00:15:12.293 { 00:15:12.293 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:12.293 "subtype": "NVMe", 00:15:12.293 "listen_addresses": [ 00:15:12.293 { 00:15:12.293 "trtype": "VFIOUSER", 00:15:12.293 "adrfam": "IPv4", 00:15:12.293 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:12.293 "trsvcid": "0" 00:15:12.293 } 00:15:12.293 ], 00:15:12.293 "allow_any_host": true, 00:15:12.293 "hosts": [], 00:15:12.293 "serial_number": "SPDK1", 00:15:12.293 "model_number": "SPDK bdev Controller", 00:15:12.293 "max_namespaces": 32, 00:15:12.293 "min_cntlid": 1, 00:15:12.293 "max_cntlid": 65519, 00:15:12.293 "namespaces": [ 00:15:12.293 { 00:15:12.293 "nsid": 1, 00:15:12.293 "bdev_name": "Malloc1", 00:15:12.293 "name": "Malloc1", 00:15:12.293 "nguid": "6C80574F54C042E8B0354789303FBEA2", 00:15:12.293 "uuid": "6c80574f-54c0-42e8-b035-4789303fbea2" 00:15:12.293 }, 00:15:12.293 { 00:15:12.293 "nsid": 2, 00:15:12.293 "bdev_name": "Malloc3", 00:15:12.293 "name": "Malloc3", 00:15:12.293 "nguid": "5AAA2EF4A9F54969AE8B04F474A90B4B", 00:15:12.293 "uuid": "5aaa2ef4-a9f5-4969-ae8b-04f474a90b4b" 00:15:12.293 } 00:15:12.293 ] 00:15:12.293 }, 00:15:12.293 { 00:15:12.293 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:12.293 "subtype": "NVMe", 00:15:12.293 "listen_addresses": [ 00:15:12.293 { 00:15:12.293 "trtype": "VFIOUSER", 00:15:12.293 "adrfam": "IPv4", 00:15:12.293 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:12.293 "trsvcid": "0" 00:15:12.293 } 00:15:12.293 ], 00:15:12.293 "allow_any_host": true, 00:15:12.293 "hosts": [], 00:15:12.293 "serial_number": "SPDK2", 00:15:12.293 "model_number": "SPDK bdev Controller", 00:15:12.293 "max_namespaces": 32, 00:15:12.293 "min_cntlid": 1, 00:15:12.293 "max_cntlid": 65519, 00:15:12.293 "namespaces": [ 
00:15:12.293 { 00:15:12.293 "nsid": 1, 00:15:12.293 "bdev_name": "Malloc2", 00:15:12.293 "name": "Malloc2", 00:15:12.293 "nguid": "972AFCCB72624AE2B2F465C07DD71002", 00:15:12.293 "uuid": "972afccb-7262-4ae2-b2f4-65c07dd71002" 00:15:12.293 }, 00:15:12.293 { 00:15:12.293 "nsid": 2, 00:15:12.293 "bdev_name": "Malloc4", 00:15:12.293 "name": "Malloc4", 00:15:12.293 "nguid": "715DB362562945F18A76CCE14C1EDB46", 00:15:12.293 "uuid": "715db362-5629-45f1-8a76-cce14c1edb46" 00:15:12.293 } 00:15:12.293 ] 00:15:12.293 } 00:15:12.293 ] 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3797750 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3793314 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3793314 ']' 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3793314 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3793314 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3793314' 00:15:12.293 killing process with pid 3793314 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3793314 00:15:12.293 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3793314 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3797911 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3797911' 00:15:12.552 Process pid: 3797911 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3797911 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3797911 ']' 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:12.552 10:36:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:12.552 [2024-07-23 10:36:00.957022] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:12.552 [2024-07-23 10:36:00.958264] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:12.552 [2024-07-23 10:36:00.958334] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.552 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.552 [2024-07-23 10:36:01.023216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.810 [2024-07-23 10:36:01.113240] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.810 [2024-07-23 10:36:01.113304] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.810 [2024-07-23 10:36:01.113320] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.810 [2024-07-23 10:36:01.113334] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.810 [2024-07-23 10:36:01.113345] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:12.810 [2024-07-23 10:36:01.113426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.811 [2024-07-23 10:36:01.113505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.811 [2024-07-23 10:36:01.113454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.811 [2024-07-23 10:36:01.113549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.811 [2024-07-23 10:36:01.202015] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:12.811 [2024-07-23 10:36:01.202223] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:12.811 [2024-07-23 10:36:01.202475] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:12.811 [2024-07-23 10:36:01.202976] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:12.811 [2024-07-23 10:36:01.203242] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
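[Editor's note] The per-device work that `setup_nvmf_vfio_user` traces next (socket directory, malloc bdev, subsystem, namespace, vfio-user listener) can be sketched as one function. The rpc command and base directory are parameters here so the sketch stays runnable outside the CI box; the actual script hardcodes `scripts/rpc.py` and `/var/run/vfio-user`:

```shell
# One iteration of the per-device setup loop traced in this run: create a
# 64 MiB / 512 B-block malloc bdev, a subsystem with serial SPDK$i, attach the
# bdev as a namespace, and listen on a vfio-user socket directory.
setup_vfio_user_device() {
    rpc=$1; base=$2; i=$3
    mkdir -p "$base/domain/vfio-user$i/$i"
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"
    $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "$base/domain/vfio-user$i/$i" -s 0
}
```

The trace runs this shape twice (i = 1, 2), after a single `nvmf_create_transport -t VFIOUSER -M -I` call that enables the interrupt-mode transport being exercised here.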
00:15:12.811 10:36:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:12.811 10:36:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:12.811 10:36:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:13.745 10:36:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:14.315 10:36:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:14.315 10:36:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:14.315 10:36:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.315 10:36:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:14.315 10:36:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:14.617 Malloc1 00:15:14.617 10:36:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:14.929 10:36:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:15.187 10:36:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:15.445 10:36:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:15.445 10:36:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:15:15.445 10:36:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:15.703 Malloc2 00:15:15.703 10:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:15.961 10:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:16.219 10:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:16.477 10:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:16.477 10:36:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3797911 00:15:16.477 10:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3797911 ']' 00:15:16.477 10:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3797911 00:15:16.477 10:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:16.477 10:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:16.477 10:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3797911 00:15:16.477 10:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:16.477 10:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:16.477 10:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3797911' 00:15:16.477 killing 
process with pid 3797911 00:15:16.477 10:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3797911 00:15:16.477 10:36:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3797911 00:15:16.736 10:36:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:16.736 10:36:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:16.736 00:15:16.736 real 0m53.957s 00:15:16.736 user 3m33.697s 00:15:16.736 sys 0m4.408s 00:15:16.736 10:36:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:16.736 10:36:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:16.736 ************************************ 00:15:16.736 END TEST nvmf_vfio_user 00:15:16.736 ************************************ 00:15:16.736 10:36:05 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:16.736 10:36:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:16.736 10:36:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:16.736 10:36:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.736 ************************************ 00:15:16.736 START TEST nvmf_vfio_user_nvme_compliance 00:15:16.736 ************************************ 00:15:16.736 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:16.995 * Looking for test storage... 
00:15:16.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.996 10:36:05 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:16.996 10:36:05 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3798449 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3798449' 00:15:16.996 Process pid: 3798449 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3798449 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3798449 ']' 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:16.996 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.996 [2024-07-23 10:36:05.338805] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:16.996 [2024-07-23 10:36:05.338902] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.996 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.996 [2024-07-23 10:36:05.402889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:16.996 [2024-07-23 10:36:05.489902] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.996 [2024-07-23 10:36:05.489968] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:16.996 [2024-07-23 10:36:05.489984] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.996 [2024-07-23 10:36:05.489997] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.996 [2024-07-23 10:36:05.490008] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.996 [2024-07-23 10:36:05.490072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.996 [2024-07-23 10:36:05.490123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.996 [2024-07-23 10:36:05.490127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.255 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:17.255 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:17.255 10:36:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:18.190 malloc0 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.190 10:36:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:18.448 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.448 00:15:18.448 00:15:18.448 CUnit - A unit testing framework for C - Version 2.1-3 00:15:18.448 http://cunit.sourceforge.net/ 00:15:18.448 00:15:18.448 00:15:18.448 Suite: nvme_compliance 00:15:18.448 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-23 10:36:06.827096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.448 [2024-07-23 10:36:06.830971] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:18.448 [2024-07-23 10:36:06.830999] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:18.448 [2024-07-23 10:36:06.831013] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:18.448 [2024-07-23 10:36:06.832130] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.448 passed 00:15:18.448 Test: admin_identify_ctrlr_verify_fused ...[2024-07-23 10:36:06.930857] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.448 [2024-07-23 10:36:06.933880] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.706 passed 00:15:18.706 Test: admin_identify_ns ...[2024-07-23 10:36:07.036519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.706 [2024-07-23 10:36:07.098504] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:18.706 [2024-07-23 10:36:07.106519] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:18.706 [2024-07-23 10:36:07.127650] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:15:18.706 passed 00:15:18.963 Test: admin_get_features_mandatory_features ...[2024-07-23 10:36:07.223779] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.963 [2024-07-23 10:36:07.226795] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.963 passed 00:15:18.963 Test: admin_get_features_optional_features ...[2024-07-23 10:36:07.325445] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.963 [2024-07-23 10:36:07.328473] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.963 passed 00:15:18.963 Test: admin_set_features_number_of_queues ...[2024-07-23 10:36:07.426668] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.221 [2024-07-23 10:36:07.533659] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.221 passed 00:15:19.221 Test: admin_get_log_page_mandatory_logs ...[2024-07-23 10:36:07.628771] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.221 [2024-07-23 10:36:07.634819] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.221 passed 00:15:19.479 Test: admin_get_log_page_with_lpo ...[2024-07-23 10:36:07.729870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.479 [2024-07-23 10:36:07.801497] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:19.479 [2024-07-23 10:36:07.814593] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.479 passed 00:15:19.479 Test: fabric_property_get ...[2024-07-23 10:36:07.909694] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.479 [2024-07-23 10:36:07.911012] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:15:19.479 [2024-07-23 10:36:07.912720] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.479 passed 00:15:19.737 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-23 10:36:08.010372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.737 [2024-07-23 10:36:08.011692] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:19.737 [2024-07-23 10:36:08.013378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.737 passed 00:15:19.737 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-23 10:36:08.112541] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.737 [2024-07-23 10:36:08.198490] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:19.737 [2024-07-23 10:36:08.214505] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:19.737 [2024-07-23 10:36:08.219622] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.995 passed 00:15:19.995 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-23 10:36:08.314275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.995 [2024-07-23 10:36:08.315621] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:19.995 [2024-07-23 10:36:08.320319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.995 passed 00:15:19.995 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-23 10:36:08.415510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.995 [2024-07-23 10:36:08.495495] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:20.252 [2024-07-23 10:36:08.519497] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:20.252 [2024-07-23 10:36:08.524652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.252 passed 00:15:20.252 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-23 10:36:08.621935] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.252 [2024-07-23 10:36:08.623286] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:20.252 [2024-07-23 10:36:08.623329] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:20.252 [2024-07-23 10:36:08.624960] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.252 passed 00:15:20.252 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-23 10:36:08.720520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.510 [2024-07-23 10:36:08.814491] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:20.510 [2024-07-23 10:36:08.822494] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:20.510 [2024-07-23 10:36:08.830489] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:20.510 [2024-07-23 10:36:08.838495] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:20.510 [2024-07-23 10:36:08.867628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.510 passed 00:15:20.510 Test: admin_create_io_sq_verify_pc ...[2024-07-23 10:36:08.962734] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.510 [2024-07-23 10:36:08.979507] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:20.510 [2024-07-23 10:36:08.997236] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.768 passed 00:15:20.768 Test: admin_create_io_qp_max_qps ...[2024-07-23 10:36:09.095901] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.701 [2024-07-23 10:36:10.188506] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:22.267 [2024-07-23 10:36:10.578565] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.267 passed 00:15:22.267 Test: admin_create_io_sq_shared_cq ...[2024-07-23 10:36:10.675830] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.525 [2024-07-23 10:36:10.808496] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:22.525 [2024-07-23 10:36:10.845587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.525 passed 00:15:22.525 00:15:22.525 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.525 suites 1 1 n/a 0 0 00:15:22.525 tests 18 18 18 0 0 00:15:22.525 asserts 360 360 360 0 n/a 00:15:22.525 00:15:22.525 Elapsed time = 1.692 seconds 00:15:22.525 10:36:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3798449 00:15:22.525 10:36:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3798449 ']' 00:15:22.525 10:36:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3798449 00:15:22.525 10:36:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:22.525 10:36:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:22.525 10:36:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3798449 00:15:22.525 10:36:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:22.525 10:36:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:22.525 10:36:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3798449' 00:15:22.525 killing process with pid 3798449 00:15:22.525 10:36:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 3798449 00:15:22.525 10:36:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3798449 00:15:22.783 10:36:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:22.783 10:36:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:22.783 00:15:22.783 real 0m5.896s 00:15:22.783 user 0m16.649s 00:15:22.783 sys 0m0.544s 00:15:22.783 10:36:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:22.783 10:36:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.783 ************************************ 00:15:22.783 END TEST nvmf_vfio_user_nvme_compliance 00:15:22.783 ************************************ 00:15:22.783 10:36:11 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:22.783 10:36:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:22.783 10:36:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:22.783 10:36:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:22.783 ************************************ 00:15:22.783 START TEST nvmf_vfio_user_fuzz 00:15:22.783 ************************************ 00:15:22.783 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:22.783 * Looking for test storage... 00:15:22.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3799528 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3799528' 00:15:22.784 Process pid: 3799528 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3799528 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3799528 ']' 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:22.784 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:23.042 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:23.042 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:23.042 10:36:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.416 malloc0 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.416 
10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:24.416 10:36:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:56.527 Fuzzing completed. 
Shutting down the fuzz application 00:15:56.527 00:15:56.527 Dumping successful admin opcodes: 00:15:56.527 8, 9, 10, 24, 00:15:56.527 Dumping successful io opcodes: 00:15:56.527 0, 00:15:56.527 NS: 0x200003a1ef00 I/O qp, Total commands completed: 566688, total successful commands: 2179, random_seed: 774965824 00:15:56.527 NS: 0x200003a1ef00 admin qp, Total commands completed: 90251, total successful commands: 726, random_seed: 40868032 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3799528 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3799528 ']' 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3799528 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3799528 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3799528' 00:15:56.527 killing process with pid 3799528 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@965 -- # kill 3799528 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3799528 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:56.527 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:56.527 00:15:56.527 real 0m32.108s 00:15:56.527 user 0m32.030s 00:15:56.528 sys 0m27.375s 00:15:56.528 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:56.528 10:36:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.528 ************************************ 00:15:56.528 END TEST nvmf_vfio_user_fuzz 00:15:56.528 ************************************ 00:15:56.528 10:36:43 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:56.528 10:36:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:56.528 10:36:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:56.528 10:36:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:56.528 ************************************ 00:15:56.528 START TEST nvmf_host_management 00:15:56.528 ************************************ 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:56.528 * Looking for test storage... 
00:15:56.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:56.528 
10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:56.528 10:36:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:56.789 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.789 
10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:56.789 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:56.789 Found net devices under 0000:08:00.0: cvl_0_0 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:56.789 Found net devices under 0000:08:00.1: cvl_0_1 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.789 10:36:45 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:56.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:56.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:15:56.789 00:15:56.789 --- 10.0.0.2 ping statistics --- 00:15:56.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.789 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:15:56.789 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:15:56.789 00:15:56.789 --- 10.0.0.1 ping statistics --- 00:15:56.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.789 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:56.790 10:36:45 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3803757 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3803757 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3803757 ']' 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:56.790 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.790 [2024-07-23 10:36:45.288290] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:15:56.790 [2024-07-23 10:36:45.288386] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.050 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.050 [2024-07-23 10:36:45.356077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.050 [2024-07-23 10:36:45.448922] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.050 [2024-07-23 10:36:45.448987] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.050 [2024-07-23 10:36:45.449003] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.050 [2024-07-23 10:36:45.449016] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.050 [2024-07-23 10:36:45.449027] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:57.050 [2024-07-23 10:36:45.449094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.050 [2024-07-23 10:36:45.449149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.050 [2024-07-23 10:36:45.449199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:57.050 [2024-07-23 10:36:45.449202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:57.308 [2024-07-23 10:36:45.606182] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:57.308 10:36:45 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:57.308 Malloc0 00:15:57.308 [2024-07-23 10:36:45.668713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3803807 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3803807 /var/tmp/bdevperf.sock 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3803807 ']' 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:57.308 10:36:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management 
-- common/autotest_common.sh@832 -- # local max_retries=100 00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:57.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:57.309 { 00:15:57.309 "params": { 00:15:57.309 "name": "Nvme$subsystem", 00:15:57.309 "trtype": "$TEST_TRANSPORT", 00:15:57.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:57.309 "adrfam": "ipv4", 00:15:57.309 "trsvcid": "$NVMF_PORT", 00:15:57.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:57.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:57.309 "hdgst": ${hdgst:-false}, 00:15:57.309 "ddgst": ${ddgst:-false} 00:15:57.309 }, 00:15:57.309 "method": "bdev_nvme_attach_controller" 00:15:57.309 } 00:15:57.309 EOF 00:15:57.309 )") 00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:57.309 10:36:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:57.309 "params": { 00:15:57.309 "name": "Nvme0", 00:15:57.309 "trtype": "tcp", 00:15:57.309 "traddr": "10.0.0.2", 00:15:57.309 "adrfam": "ipv4", 00:15:57.309 "trsvcid": "4420", 00:15:57.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:57.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:57.309 "hdgst": false, 00:15:57.309 "ddgst": false 00:15:57.309 }, 00:15:57.309 "method": "bdev_nvme_attach_controller" 00:15:57.309 }' 00:15:57.309 [2024-07-23 10:36:45.751167] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:57.309 [2024-07-23 10:36:45.751254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803807 ] 00:15:57.309 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.567 [2024-07-23 10:36:45.812468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.567 [2024-07-23 10:36:45.900665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.825 Running I/O for 10 seconds... 
00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.825 
10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:57.825 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.084 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:15:58.084 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:15:58.084 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:15:58.084 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:15:58.084 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:58.084 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:58.084 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:58.084 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.084 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:58.344 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.344 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:15:58.344 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:15:58.344 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:15:58.344 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:58.344 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:58.344 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:58.344 10:36:46 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.344 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:58.344 [2024-07-23 10:36:46.633798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.345 [2024-07-23 10:36:46.633900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.633920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.345 [2024-07-23 10:36:46.633936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.633951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.345 [2024-07-23 10:36:46.633966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.633981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.345 [2024-07-23 10:36:46.633996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.634010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560a70 is same with the state(5) to be set 00:15:58.345 [2024-07-23 10:36:46.636088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.345 [2024-07-23 10:36:46.636333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:58.345 [2024-07-23 10:36:46.636509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.345 [2024-07-23 10:36:46.636657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:58.345 [2024-07-23 10:36:46.636712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:58.345 [2024-07-23 10:36:46.636783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.636965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.636984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.637001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.637019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.637035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.637054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.637071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.637090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 
[2024-07-23 10:36:46.637106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.637125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.637141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.637160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.637177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.637195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.637212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.637230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.637247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.637265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.345 [2024-07-23 10:36:46.637285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.345 [2024-07-23 10:36:46.637304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637931] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.637965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.637982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.638018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.638053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.638088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.638123] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.638158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.638197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.638234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.638269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.638305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.638340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.638374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:58.346 [2024-07-23 10:36:46.638410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.346 [2024-07-23 10:36:46.638427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155ada0 is same with the state(5) to be set 00:15:58.346 [2024-07-23 10:36:46.638500] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x155ada0 was disconnected and freed. reset controller. 
00:15:58.346 [2024-07-23 10:36:46.639856] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:58.346 task offset: 81792 on job bdev=Nvme0n1 fails 00:15:58.346 00:15:58.346 Latency(us) 00:15:58.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.346 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:58.346 Job: Nvme0n1 ended in about 0.42 seconds with error 00:15:58.346 Verification LBA range: start 0x0 length 0x400 00:15:58.346 Nvme0n1 : 0.42 1384.51 86.53 153.83 0.00 40177.10 7524.50 38836.15 00:15:58.346 =================================================================================================================== 00:15:58.346 Total : 1384.51 86.53 153.83 0.00 40177.10 7524.50 38836.15 00:15:58.346 [2024-07-23 10:36:46.642314] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:58.346 [2024-07-23 10:36:46.642348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1560a70 (9): Bad file descriptor 00:15:58.346 10:36:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.346 10:36:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:58.346 [2024-07-23 10:36:46.784619] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:59.281 10:36:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3803807 00:15:59.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3803807) - No such process 00:15:59.281 10:36:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:59.281 10:36:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:59.281 10:36:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:59.281 10:36:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:59.281 10:36:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:59.281 10:36:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:59.281 10:36:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:59.281 10:36:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:59.281 { 00:15:59.281 "params": { 00:15:59.281 "name": "Nvme$subsystem", 00:15:59.281 "trtype": "$TEST_TRANSPORT", 00:15:59.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:59.282 "adrfam": "ipv4", 00:15:59.282 "trsvcid": "$NVMF_PORT", 00:15:59.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:59.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:59.282 "hdgst": ${hdgst:-false}, 00:15:59.282 "ddgst": ${ddgst:-false} 00:15:59.282 }, 00:15:59.282 "method": "bdev_nvme_attach_controller" 00:15:59.282 } 00:15:59.282 EOF 00:15:59.282 )") 00:15:59.282 10:36:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:59.282 10:36:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:15:59.282 10:36:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:59.282 10:36:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:59.282 "params": { 00:15:59.282 "name": "Nvme0", 00:15:59.282 "trtype": "tcp", 00:15:59.282 "traddr": "10.0.0.2", 00:15:59.282 "adrfam": "ipv4", 00:15:59.282 "trsvcid": "4420", 00:15:59.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:59.282 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:59.282 "hdgst": false, 00:15:59.282 "ddgst": false 00:15:59.282 }, 00:15:59.282 "method": "bdev_nvme_attach_controller" 00:15:59.282 }' 00:15:59.282 [2024-07-23 10:36:47.693711] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:59.282 [2024-07-23 10:36:47.693807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3804022 ] 00:15:59.282 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.282 [2024-07-23 10:36:47.754500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.540 [2024-07-23 10:36:47.842814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.798 Running I/O for 1 seconds... 
00:16:00.732 00:16:00.732 Latency(us) 00:16:00.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.732 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:00.732 Verification LBA range: start 0x0 length 0x400 00:16:00.732 Nvme0n1 : 1.04 1469.48 91.84 0.00 0.00 42629.10 4271.98 39030.33 00:16:00.732 =================================================================================================================== 00:16:00.732 Total : 1469.48 91.84 0.00 0.00 42629.10 4271.98 39030.33 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:00.991 rmmod nvme_tcp 00:16:00.991 rmmod nvme_fabrics 00:16:00.991 rmmod nvme_keyring 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.991 
10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3803757 ']' 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3803757 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3803757 ']' 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3803757 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3803757 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3803757' 00:16:00.991 killing process with pid 3803757 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3803757 00:16:00.991 10:36:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3803757 00:16:01.251 [2024-07-23 10:36:49.631519] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:01.251 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:01.251 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:01.251 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:01.251 10:36:49 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.251 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.251 10:36:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.251 10:36:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.251 10:36:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.218 10:36:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:03.218 10:36:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:03.218 00:16:03.218 real 0m8.384s 00:16:03.218 user 0m20.011s 00:16:03.218 sys 0m2.408s 00:16:03.218 10:36:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:03.218 10:36:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:03.218 ************************************ 00:16:03.218 END TEST nvmf_host_management 00:16:03.218 ************************************ 00:16:03.476 10:36:51 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:03.476 10:36:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:03.476 10:36:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:03.476 10:36:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:03.476 ************************************ 00:16:03.476 START TEST nvmf_lvol 00:16:03.476 ************************************ 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:03.476 * Looking for test storage... 
00:16:03.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:03.476 10:36:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:05.381 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:05.381 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:05.381 Found net devices under 0000:08:00.0: cvl_0_0 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:05.381 Found net devices under 0000:08:00.1: cvl_0_1 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:05.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:16:05.381 00:16:05.381 --- 10.0.0.2 ping statistics --- 00:16:05.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.381 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:05.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:05.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:16:05.381 00:16:05.381 --- 10.0.0.1 ping statistics --- 00:16:05.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.381 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:05.381 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3805635 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3805635 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3805635 ']' 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 
-- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:05.382 10:36:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:05.382 [2024-07-23 10:36:53.671512] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:05.382 [2024-07-23 10:36:53.671606] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.382 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.382 [2024-07-23 10:36:53.751562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:05.382 [2024-07-23 10:36:53.856056] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.382 [2024-07-23 10:36:53.856136] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.382 [2024-07-23 10:36:53.856165] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.382 [2024-07-23 10:36:53.856191] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.382 [2024-07-23 10:36:53.856214] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:05.382 [2024-07-23 10:36:53.856318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.382 [2024-07-23 10:36:53.856388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.382 [2024-07-23 10:36:53.856378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.640 10:36:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:05.640 10:36:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:05.640 10:36:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:05.640 10:36:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:05.640 10:36:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:05.640 10:36:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.640 10:36:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:05.898 [2024-07-23 10:36:54.335090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.898 10:36:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.464 10:36:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:06.464 10:36:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.723 10:36:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:06.723 10:36:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:06.981 10:36:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:07.239 10:36:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=36cb50db-f574-48a2-9462-9fa1903c267c 00:16:07.239 10:36:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 36cb50db-f574-48a2-9462-9fa1903c267c lvol 20 00:16:07.497 10:36:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=230f0653-a397-4373-9514-eb8919eb5514 00:16:07.497 10:36:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:07.755 10:36:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 230f0653-a397-4373-9514-eb8919eb5514 00:16:08.013 10:36:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:08.271 [2024-07-23 10:36:56.761421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.680 10:36:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:08.681 10:36:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3805965 00:16:08.681 10:36:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:08.681 10:36:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:08.681 EAL: No free 2048 kB hugepages reported on node 1 
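For reference, the lvol setup exercised in the log above boils down to roughly the following RPC sequence. This is a sketch reconstructed from the commands visible in the trace, not a verbatim excerpt: the `rpc.py` path, UUIDs, and perf parameters are placeholders, and it assumes a running `nvmf_tgt` reachable on the default `/var/tmp/spdk.sock`.

```shell
# Sketch of the nvmf_lvol flow seen in the trace (paths/UUIDs are placeholders).
RPC=./scripts/rpc.py   # assumption: SPDK checkout root, nvmf_tgt already running

# Transport and backing bdevs: two 64 MiB malloc bdevs striped into raid0.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512        # -> Malloc0
$RPC bdev_malloc_create 64 512        # -> Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"

# Lvolstore on the raid, then a 20 MiB lvol (rpc prints the UUIDs used below).
$RPC bdev_lvol_create_lvstore raid0 lvs        # -> <lvs-uuid>
$RPC bdev_lvol_create -u <lvs-uuid> lvol 20    # -> <lvol-uuid>

# Export the lvol over NVMe/TCP.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Under perf load: snapshot, grow the lvol to 30 MiB, clone, inflate the clone.
$RPC bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT   # -> <snap-uuid>
$RPC bdev_lvol_resize <lvol-uuid> 30
$RPC bdev_lvol_clone <snap-uuid> MY_CLONE         # -> <clone-uuid>
$RPC bdev_lvol_inflate <clone-uuid>
```

Teardown then mirrors setup in reverse, as in the trace: `nvmf_delete_subsystem`, `bdev_lvol_delete`, `bdev_lvol_delete_lvstore`. The sequence requires a live target, so it is shown here as a command fragment rather than a runnable script.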
00:16:09.615 10:36:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 230f0653-a397-4373-9514-eb8919eb5514 MY_SNAPSHOT 00:16:10.180 10:36:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0c808a9c-3fe7-4956-93b7-44656ab18a87 00:16:10.180 10:36:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 230f0653-a397-4373-9514-eb8919eb5514 30 00:16:10.437 10:36:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0c808a9c-3fe7-4956-93b7-44656ab18a87 MY_CLONE 00:16:10.695 10:36:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=297c1599-ec7c-4d67-bda2-9b297f49653d 00:16:10.695 10:36:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 297c1599-ec7c-4d67-bda2-9b297f49653d 00:16:11.261 10:36:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3805965 00:16:19.430 Initializing NVMe Controllers 00:16:19.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:19.431 Controller IO queue size 128, less than required. 00:16:19.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:19.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:19.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:19.431 Initialization complete. Launching workers. 
00:16:19.431 ======================================================== 00:16:19.431 Latency(us) 00:16:19.431 Device Information : IOPS MiB/s Average min max 00:16:19.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9401.90 36.73 13619.01 2577.21 77987.09 00:16:19.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9553.00 37.32 13411.29 1978.35 74692.58 00:16:19.431 ======================================================== 00:16:19.431 Total : 18954.90 74.04 13514.33 1978.35 77987.09 00:16:19.431 00:16:19.431 10:37:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:19.431 10:37:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 230f0653-a397-4373-9514-eb8919eb5514 00:16:19.431 10:37:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 36cb50db-f574-48a2-9462-9fa1903c267c 00:16:19.689 10:37:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:19.689 10:37:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:19.689 10:37:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:19.689 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:19.689 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:19.689 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.689 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:19.689 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.689 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.689 rmmod nvme_tcp 00:16:19.689 rmmod nvme_fabrics 00:16:19.689 rmmod nvme_keyring 00:16:19.948 
10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3805635 ']' 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3805635 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3805635 ']' 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3805635 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3805635 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3805635' 00:16:19.948 killing process with pid 3805635 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3805635 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3805635 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.948 10:37:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:22.492 00:16:22.492 real 0m18.707s 00:16:22.492 user 1m5.115s 00:16:22.492 sys 0m5.445s 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:22.492 ************************************ 00:16:22.492 END TEST nvmf_lvol 00:16:22.492 ************************************ 00:16:22.492 10:37:10 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:22.492 10:37:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:22.492 10:37:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:22.492 10:37:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:22.492 ************************************ 00:16:22.492 START TEST nvmf_lvs_grow 00:16:22.492 ************************************ 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:22.492 * Looking for test storage... 
00:16:22.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.492 10:37:10 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.492 10:37:10 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:22.492 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.493 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:22.493 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:22.493 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:22.493 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.493 10:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.493 10:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.493 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:22.493 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:22.493 10:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:22.493 10:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:23.876 10:37:12 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.876 10:37:12 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:23.876 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:23.876 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.876 10:37:12 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:23.876 Found net devices under 0000:08:00.0: cvl_0_0 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:23.876 Found net devices under 0000:08:00.1: cvl_0_1 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.876 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:23.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:16:23.877 00:16:23.877 --- 10.0.0.2 ping statistics --- 00:16:23.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.877 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:16:23.877 00:16:23.877 --- 10.0.0.1 ping statistics --- 00:16:23.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.877 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.877 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3808478 00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3808478 00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3808478 ']' 
00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:24.136 10:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:24.136 [2024-07-23 10:37:12.453524] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:24.136 [2024-07-23 10:37:12.453622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.136 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.136 [2024-07-23 10:37:12.517468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.136 [2024-07-23 10:37:12.604078] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.136 [2024-07-23 10:37:12.604138] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.136 [2024-07-23 10:37:12.604154] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.136 [2024-07-23 10:37:12.604172] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.136 [2024-07-23 10:37:12.604184] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:24.136 [2024-07-23 10:37:12.604213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.395 10:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:24.395 10:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:24.395 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:24.395 10:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:24.395 10:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:24.395 10:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.395 10:37:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:24.654 [2024-07-23 10:37:12.999886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:24.654 ************************************ 00:16:24.654 START TEST lvs_grow_clean 00:16:24.654 ************************************ 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:24.654 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:24.913 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:24.913 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:25.172 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c3e015b9-b31b-495f-87e1-b102aec216ed 00:16:25.172 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e015b9-b31b-495f-87e1-b102aec216ed 00:16:25.172 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:25.743 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:25.743 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:25.743 10:37:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c3e015b9-b31b-495f-87e1-b102aec216ed lvol 150 00:16:26.001 10:37:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5e40aaae-fe24-4157-b5e4-cd37188d8a00 00:16:26.001 10:37:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:26.001 10:37:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:26.260 [2024-07-23 10:37:14.526259] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:26.260 [2024-07-23 10:37:14.526346] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:26.260 true 00:16:26.260 10:37:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e015b9-b31b-495f-87e1-b102aec216ed 00:16:26.260 10:37:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:26.518 10:37:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:26.518 10:37:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:16:26.777 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5e40aaae-fe24-4157-b5e4-cd37188d8a00 00:16:27.036 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:27.294 [2024-07-23 10:37:15.657768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.294 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:27.553 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3808898 00:16:27.553 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:27.553 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:27.553 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3808898 /var/tmp/bdevperf.sock 00:16:27.553 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3808898 ']' 00:16:27.553 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.553 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:27.553 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:27.553 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:27.553 10:37:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:27.553 [2024-07-23 10:37:16.024554] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:27.553 [2024-07-23 10:37:16.024657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808898 ] 00:16:27.553 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.811 [2024-07-23 10:37:16.085821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.811 [2024-07-23 10:37:16.173420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.811 10:37:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:27.811 10:37:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:27.811 10:37:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:28.381 Nvme0n1 00:16:28.381 10:37:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:28.640 [ 00:16:28.640 { 00:16:28.640 "name": "Nvme0n1", 00:16:28.640 "aliases": [ 00:16:28.640 "5e40aaae-fe24-4157-b5e4-cd37188d8a00" 00:16:28.640 ], 00:16:28.640 
"product_name": "NVMe disk", 00:16:28.640 "block_size": 4096, 00:16:28.640 "num_blocks": 38912, 00:16:28.640 "uuid": "5e40aaae-fe24-4157-b5e4-cd37188d8a00", 00:16:28.640 "assigned_rate_limits": { 00:16:28.640 "rw_ios_per_sec": 0, 00:16:28.640 "rw_mbytes_per_sec": 0, 00:16:28.640 "r_mbytes_per_sec": 0, 00:16:28.640 "w_mbytes_per_sec": 0 00:16:28.640 }, 00:16:28.640 "claimed": false, 00:16:28.640 "zoned": false, 00:16:28.640 "supported_io_types": { 00:16:28.640 "read": true, 00:16:28.640 "write": true, 00:16:28.640 "unmap": true, 00:16:28.640 "write_zeroes": true, 00:16:28.640 "flush": true, 00:16:28.640 "reset": true, 00:16:28.640 "compare": true, 00:16:28.640 "compare_and_write": true, 00:16:28.640 "abort": true, 00:16:28.640 "nvme_admin": true, 00:16:28.640 "nvme_io": true 00:16:28.640 }, 00:16:28.640 "memory_domains": [ 00:16:28.640 { 00:16:28.640 "dma_device_id": "system", 00:16:28.640 "dma_device_type": 1 00:16:28.640 } 00:16:28.640 ], 00:16:28.640 "driver_specific": { 00:16:28.640 "nvme": [ 00:16:28.640 { 00:16:28.640 "trid": { 00:16:28.640 "trtype": "TCP", 00:16:28.640 "adrfam": "IPv4", 00:16:28.640 "traddr": "10.0.0.2", 00:16:28.640 "trsvcid": "4420", 00:16:28.640 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:28.640 }, 00:16:28.640 "ctrlr_data": { 00:16:28.640 "cntlid": 1, 00:16:28.640 "vendor_id": "0x8086", 00:16:28.640 "model_number": "SPDK bdev Controller", 00:16:28.640 "serial_number": "SPDK0", 00:16:28.640 "firmware_revision": "24.05.1", 00:16:28.640 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:28.640 "oacs": { 00:16:28.640 "security": 0, 00:16:28.640 "format": 0, 00:16:28.640 "firmware": 0, 00:16:28.640 "ns_manage": 0 00:16:28.640 }, 00:16:28.640 "multi_ctrlr": true, 00:16:28.640 "ana_reporting": false 00:16:28.640 }, 00:16:28.640 "vs": { 00:16:28.640 "nvme_version": "1.3" 00:16:28.640 }, 00:16:28.640 "ns_data": { 00:16:28.640 "id": 1, 00:16:28.640 "can_share": true 00:16:28.640 } 00:16:28.640 } 00:16:28.640 ], 00:16:28.640 "mp_policy": 
"active_passive" 00:16:28.640 } 00:16:28.640 } 00:16:28.640 ] 00:16:28.640 10:37:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3809001 00:16:28.640 10:37:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:28.640 10:37:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:28.640 Running I/O for 10 seconds... 00:16:30.024 Latency(us) 00:16:30.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:30.024 Nvme0n1 : 1.00 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:16:30.024 =================================================================================================================== 00:16:30.024 Total : 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:16:30.024 00:16:30.593 10:37:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c3e015b9-b31b-495f-87e1-b102aec216ed 00:16:30.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:30.593 Nvme0n1 : 2.00 13843.50 54.08 0.00 0.00 0.00 0.00 0.00 00:16:30.593 =================================================================================================================== 00:16:30.593 Total : 13843.50 54.08 0.00 0.00 0.00 0.00 0.00 00:16:30.593 00:16:30.852 true 00:16:30.852 10:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e015b9-b31b-495f-87e1-b102aec216ed 00:16:30.852 10:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:31.112 10:37:19 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:31.112 10:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:31.112 10:37:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3809001 00:16:31.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:31.683 Nvme0n1 : 3.00 13822.67 53.99 0.00 0.00 0.00 0.00 0.00 00:16:31.683 =================================================================================================================== 00:16:31.683 Total : 13822.67 53.99 0.00 0.00 0.00 0.00 0.00 00:16:31.683 00:16:32.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:32.623 Nvme0n1 : 4.00 13892.00 54.27 0.00 0.00 0.00 0.00 0.00 00:16:32.623 =================================================================================================================== 00:16:32.623 Total : 13892.00 54.27 0.00 0.00 0.00 0.00 0.00 00:16:32.623 00:16:34.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.006 Nvme0n1 : 5.00 13924.60 54.39 0.00 0.00 0.00 0.00 0.00 00:16:34.006 =================================================================================================================== 00:16:34.006 Total : 13924.60 54.39 0.00 0.00 0.00 0.00 0.00 00:16:34.006 00:16:34.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.947 Nvme0n1 : 6.00 13974.50 54.59 0.00 0.00 0.00 0.00 0.00 00:16:34.947 =================================================================================================================== 00:16:34.947 Total : 13974.50 54.59 0.00 0.00 0.00 0.00 0.00 00:16:34.947 00:16:35.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.897 Nvme0n1 : 7.00 14015.43 54.75 0.00 0.00 0.00 0.00 0.00 00:16:35.897 
=================================================================================================================== 00:16:35.897 Total : 14015.43 54.75 0.00 0.00 0.00 0.00 0.00 00:16:35.897 00:16:36.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.883 Nvme0n1 : 8.00 14041.50 54.85 0.00 0.00 0.00 0.00 0.00 00:16:36.883 =================================================================================================================== 00:16:36.883 Total : 14041.50 54.85 0.00 0.00 0.00 0.00 0.00 00:16:36.883 00:16:37.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.822 Nvme0n1 : 9.00 14068.89 54.96 0.00 0.00 0.00 0.00 0.00 00:16:37.822 =================================================================================================================== 00:16:37.822 Total : 14068.89 54.96 0.00 0.00 0.00 0.00 0.00 00:16:37.822 00:16:38.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.761 Nvme0n1 : 10.00 14090.70 55.04 0.00 0.00 0.00 0.00 0.00 00:16:38.761 =================================================================================================================== 00:16:38.761 Total : 14090.70 55.04 0.00 0.00 0.00 0.00 0.00 00:16:38.761 00:16:38.761 00:16:38.761 Latency(us) 00:16:38.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.761 Nvme0n1 : 10.01 14096.08 55.06 0.00 0.00 9075.02 2305.90 22039.51 00:16:38.761 =================================================================================================================== 00:16:38.761 Total : 14096.08 55.06 0.00 0.00 9075.02 2305.90 22039.51 00:16:38.761 0 00:16:38.761 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3808898 00:16:38.761 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 
3808898 ']' 00:16:38.761 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3808898 00:16:38.761 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:16:38.761 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:38.761 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3808898 00:16:38.761 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:38.761 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:38.761 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3808898' 00:16:38.761 killing process with pid 3808898 00:16:38.761 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3808898 00:16:38.761 Received shutdown signal, test time was about 10.000000 seconds 00:16:38.761 00:16:38.761 Latency(us) 00:16:38.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.761 =================================================================================================================== 00:16:38.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:38.761 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3808898 00:16:39.020 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:39.278 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:39.537 10:37:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e015b9-b31b-495f-87e1-b102aec216ed 00:16:39.537 10:37:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:39.795 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:39.795 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:39.795 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:40.053 [2024-07-23 10:37:28.494536] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:40.053 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e015b9-b31b-495f-87e1-b102aec216ed 00:16:40.053 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:40.053 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e015b9-b31b-495f-87e1-b102aec216ed 00:16:40.053 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.054 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:40.054 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.054 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:40.054 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.054 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:40.054 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.054 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:40.054 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e015b9-b31b-495f-87e1-b102aec216ed 00:16:40.311 request: 00:16:40.311 { 00:16:40.311 "uuid": "c3e015b9-b31b-495f-87e1-b102aec216ed", 00:16:40.311 "method": "bdev_lvol_get_lvstores", 00:16:40.311 "req_id": 1 00:16:40.311 } 00:16:40.311 Got JSON-RPC error response 00:16:40.311 response: 00:16:40.311 { 00:16:40.311 "code": -19, 00:16:40.311 "message": "No such device" 00:16:40.311 } 00:16:40.311 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:40.311 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:40.311 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:40.311 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:40.311 10:37:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:40.569 aio_bdev 
00:16:40.569 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5e40aaae-fe24-4157-b5e4-cd37188d8a00 00:16:40.569 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=5e40aaae-fe24-4157-b5e4-cd37188d8a00 00:16:40.569 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:40.569 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:16:40.569 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:40.569 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:40.569 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:40.827 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5e40aaae-fe24-4157-b5e4-cd37188d8a00 -t 2000 00:16:41.086 [ 00:16:41.086 { 00:16:41.086 "name": "5e40aaae-fe24-4157-b5e4-cd37188d8a00", 00:16:41.086 "aliases": [ 00:16:41.086 "lvs/lvol" 00:16:41.086 ], 00:16:41.086 "product_name": "Logical Volume", 00:16:41.086 "block_size": 4096, 00:16:41.086 "num_blocks": 38912, 00:16:41.086 "uuid": "5e40aaae-fe24-4157-b5e4-cd37188d8a00", 00:16:41.086 "assigned_rate_limits": { 00:16:41.086 "rw_ios_per_sec": 0, 00:16:41.086 "rw_mbytes_per_sec": 0, 00:16:41.086 "r_mbytes_per_sec": 0, 00:16:41.086 "w_mbytes_per_sec": 0 00:16:41.086 }, 00:16:41.086 "claimed": false, 00:16:41.086 "zoned": false, 00:16:41.086 "supported_io_types": { 00:16:41.086 "read": true, 00:16:41.086 "write": true, 00:16:41.086 "unmap": true, 00:16:41.086 "write_zeroes": true, 00:16:41.086 "flush": false, 00:16:41.086 "reset": true, 00:16:41.086 "compare": false, 
00:16:41.086 "compare_and_write": false, 00:16:41.086 "abort": false, 00:16:41.086 "nvme_admin": false, 00:16:41.086 "nvme_io": false 00:16:41.086 }, 00:16:41.086 "driver_specific": { 00:16:41.086 "lvol": { 00:16:41.086 "lvol_store_uuid": "c3e015b9-b31b-495f-87e1-b102aec216ed", 00:16:41.086 "base_bdev": "aio_bdev", 00:16:41.086 "thin_provision": false, 00:16:41.086 "num_allocated_clusters": 38, 00:16:41.086 "snapshot": false, 00:16:41.086 "clone": false, 00:16:41.086 "esnap_clone": false 00:16:41.086 } 00:16:41.086 } 00:16:41.086 } 00:16:41.086 ] 00:16:41.087 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:16:41.087 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e015b9-b31b-495f-87e1-b102aec216ed 00:16:41.087 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:41.345 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:41.346 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e015b9-b31b-495f-87e1-b102aec216ed 00:16:41.346 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:41.604 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:41.604 10:37:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5e40aaae-fe24-4157-b5e4-cd37188d8a00 00:16:41.863 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
c3e015b9-b31b-495f-87e1-b102aec216ed 00:16:42.121 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:42.688 00:16:42.688 real 0m17.895s 00:16:42.688 user 0m17.483s 00:16:42.688 sys 0m1.904s 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:42.688 ************************************ 00:16:42.688 END TEST lvs_grow_clean 00:16:42.688 ************************************ 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:42.688 ************************************ 00:16:42.688 START TEST lvs_grow_dirty 00:16:42.688 ************************************ 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:42.688 10:37:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:42.947 10:37:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:42.947 10:37:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:43.205 10:37:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=44ad6a8a-366e-4553-a07c-b89e2c78baca 00:16:43.205 10:37:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:43.205 10:37:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44ad6a8a-366e-4553-a07c-b89e2c78baca 00:16:43.464 10:37:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:43.464 10:37:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:43.464 
10:37:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 44ad6a8a-366e-4553-a07c-b89e2c78baca lvol 150 00:16:43.722 10:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6e31ab84-7307-4cae-bd3a-6d5e88e810ae 00:16:43.722 10:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:43.722 10:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:43.983 [2024-07-23 10:37:32.473171] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:43.983 [2024-07-23 10:37:32.473255] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:43.983 true 00:16:44.242 10:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:44.242 10:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44ad6a8a-366e-4553-a07c-b89e2c78baca 00:16:44.500 10:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:44.500 10:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:44.759 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 6e31ab84-7307-4cae-bd3a-6d5e88e810ae 00:16:45.019 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:45.019 [2024-07-23 10:37:33.508414] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.278 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:45.278 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3810555 00:16:45.278 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:45.278 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:45.278 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3810555 /var/tmp/bdevperf.sock 00:16:45.278 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3810555 ']' 00:16:45.278 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:45.278 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:45.278 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:45.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:45.278 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:45.278 10:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:45.536 [2024-07-23 10:37:33.815917] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:45.536 [2024-07-23 10:37:33.816001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810555 ] 00:16:45.536 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.536 [2024-07-23 10:37:33.870235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.536 [2024-07-23 10:37:33.961244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.795 10:37:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:45.795 10:37:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:16:45.795 10:37:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:46.053 Nvme0n1 00:16:46.053 10:37:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:46.311 [ 00:16:46.311 { 00:16:46.311 "name": "Nvme0n1", 00:16:46.311 "aliases": [ 00:16:46.311 "6e31ab84-7307-4cae-bd3a-6d5e88e810ae" 00:16:46.311 ], 00:16:46.311 "product_name": "NVMe disk", 00:16:46.311 "block_size": 4096, 00:16:46.311 "num_blocks": 
38912, 00:16:46.311 "uuid": "6e31ab84-7307-4cae-bd3a-6d5e88e810ae", 00:16:46.311 "assigned_rate_limits": { 00:16:46.311 "rw_ios_per_sec": 0, 00:16:46.311 "rw_mbytes_per_sec": 0, 00:16:46.311 "r_mbytes_per_sec": 0, 00:16:46.311 "w_mbytes_per_sec": 0 00:16:46.311 }, 00:16:46.311 "claimed": false, 00:16:46.311 "zoned": false, 00:16:46.311 "supported_io_types": { 00:16:46.311 "read": true, 00:16:46.311 "write": true, 00:16:46.311 "unmap": true, 00:16:46.311 "write_zeroes": true, 00:16:46.312 "flush": true, 00:16:46.312 "reset": true, 00:16:46.312 "compare": true, 00:16:46.312 "compare_and_write": true, 00:16:46.312 "abort": true, 00:16:46.312 "nvme_admin": true, 00:16:46.312 "nvme_io": true 00:16:46.312 }, 00:16:46.312 "memory_domains": [ 00:16:46.312 { 00:16:46.312 "dma_device_id": "system", 00:16:46.312 "dma_device_type": 1 00:16:46.312 } 00:16:46.312 ], 00:16:46.312 "driver_specific": { 00:16:46.312 "nvme": [ 00:16:46.312 { 00:16:46.312 "trid": { 00:16:46.312 "trtype": "TCP", 00:16:46.312 "adrfam": "IPv4", 00:16:46.312 "traddr": "10.0.0.2", 00:16:46.312 "trsvcid": "4420", 00:16:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:46.312 }, 00:16:46.312 "ctrlr_data": { 00:16:46.312 "cntlid": 1, 00:16:46.312 "vendor_id": "0x8086", 00:16:46.312 "model_number": "SPDK bdev Controller", 00:16:46.312 "serial_number": "SPDK0", 00:16:46.312 "firmware_revision": "24.05.1", 00:16:46.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:46.312 "oacs": { 00:16:46.312 "security": 0, 00:16:46.312 "format": 0, 00:16:46.312 "firmware": 0, 00:16:46.312 "ns_manage": 0 00:16:46.312 }, 00:16:46.312 "multi_ctrlr": true, 00:16:46.312 "ana_reporting": false 00:16:46.312 }, 00:16:46.312 "vs": { 00:16:46.312 "nvme_version": "1.3" 00:16:46.312 }, 00:16:46.312 "ns_data": { 00:16:46.312 "id": 1, 00:16:46.312 "can_share": true 00:16:46.312 } 00:16:46.312 } 00:16:46.312 ], 00:16:46.312 "mp_policy": "active_passive" 00:16:46.312 } 00:16:46.312 } 00:16:46.312 ] 00:16:46.312 10:37:34 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3810659 00:16:46.312 10:37:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:46.312 10:37:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:46.572 Running I/O for 10 seconds... 00:16:47.512 Latency(us) 00:16:47.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.512 Nvme0n1 : 1.00 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:16:47.512 =================================================================================================================== 00:16:47.512 Total : 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:16:47.512 00:16:48.452 10:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 44ad6a8a-366e-4553-a07c-b89e2c78baca 00:16:48.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.452 Nvme0n1 : 2.00 13907.00 54.32 0.00 0.00 0.00 0.00 0.00 00:16:48.452 =================================================================================================================== 00:16:48.452 Total : 13907.00 54.32 0.00 0.00 0.00 0.00 0.00 00:16:48.452 00:16:48.711 true 00:16:48.711 10:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44ad6a8a-366e-4553-a07c-b89e2c78baca 00:16:48.711 10:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:48.971 10:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 
00:16:48.971 10:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:48.971 10:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3810659 00:16:49.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.541 Nvme0n1 : 3.00 13970.33 54.57 0.00 0.00 0.00 0.00 0.00 00:16:49.541 =================================================================================================================== 00:16:49.541 Total : 13970.33 54.57 0.00 0.00 0.00 0.00 0.00 00:16:49.541 00:16:50.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.480 Nvme0n1 : 4.00 14033.75 54.82 0.00 0.00 0.00 0.00 0.00 00:16:50.480 =================================================================================================================== 00:16:50.480 Total : 14033.75 54.82 0.00 0.00 0.00 0.00 0.00 00:16:50.480 00:16:51.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.419 Nvme0n1 : 5.00 14071.80 54.97 0.00 0.00 0.00 0.00 0.00 00:16:51.419 =================================================================================================================== 00:16:51.419 Total : 14071.80 54.97 0.00 0.00 0.00 0.00 0.00 00:16:51.419 00:16:52.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.795 Nvme0n1 : 6.00 14118.33 55.15 0.00 0.00 0.00 0.00 0.00 00:16:52.795 =================================================================================================================== 00:16:52.795 Total : 14118.33 55.15 0.00 0.00 0.00 0.00 0.00 00:16:52.795 00:16:53.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.735 Nvme0n1 : 7.00 14151.57 55.28 0.00 0.00 0.00 0.00 0.00 00:16:53.735 =================================================================================================================== 00:16:53.735 Total : 14151.57 55.28 
0.00 0.00 0.00 0.00 0.00 00:16:53.735 00:16:54.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.687 Nvme0n1 : 8.00 14176.50 55.38 0.00 0.00 0.00 0.00 0.00 00:16:54.687 =================================================================================================================== 00:16:54.687 Total : 14176.50 55.38 0.00 0.00 0.00 0.00 0.00 00:16:54.687 00:16:55.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.626 Nvme0n1 : 9.00 14203.00 55.48 0.00 0.00 0.00 0.00 0.00 00:16:55.626 =================================================================================================================== 00:16:55.626 Total : 14203.00 55.48 0.00 0.00 0.00 0.00 0.00 00:16:55.626 00:16:56.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.565 Nvme0n1 : 10.00 14224.40 55.56 0.00 0.00 0.00 0.00 0.00 00:16:56.565 =================================================================================================================== 00:16:56.565 Total : 14224.40 55.56 0.00 0.00 0.00 0.00 0.00 00:16:56.565 00:16:56.565 00:16:56.565 Latency(us) 00:16:56.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.565 Nvme0n1 : 10.01 14222.37 55.56 0.00 0.00 8994.24 4975.88 19126.80 00:16:56.565 =================================================================================================================== 00:16:56.565 Total : 14222.37 55.56 0.00 0.00 8994.24 4975.88 19126.80 00:16:56.565 0 00:16:56.565 10:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3810555 00:16:56.565 10:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3810555 ']' 00:16:56.565 10:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3810555 00:16:56.565 10:37:44 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:16:56.565 10:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:56.565 10:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3810555 00:16:56.565 10:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:56.565 10:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:56.565 10:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3810555' 00:16:56.565 killing process with pid 3810555 00:16:56.565 10:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3810555 00:16:56.565 Received shutdown signal, test time was about 10.000000 seconds 00:16:56.565 00:16:56.565 Latency(us) 00:16:56.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.565 =================================================================================================================== 00:16:56.565 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.565 10:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3810555 00:16:56.825 10:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:57.083 10:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:57.341 10:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
44ad6a8a-366e-4553-a07c-b89e2c78baca 00:16:57.341 10:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:57.599 10:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:57.599 10:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:57.599 10:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3808478 00:16:57.599 10:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3808478 00:16:57.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3808478 Killed "${NVMF_APP[@]}" "$@" 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3811676 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3811676 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3811676 ']' 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.599 10:37:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:57.599 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:57.599 [2024-07-23 10:37:46.075968] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:57.599 [2024-07-23 10:37:46.076057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.858 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.858 [2024-07-23 10:37:46.143542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.858 [2024-07-23 10:37:46.229356] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.858 [2024-07-23 10:37:46.229417] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.858 [2024-07-23 10:37:46.229432] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.858 [2024-07-23 10:37:46.229445] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.858 [2024-07-23 10:37:46.229457] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:57.858 [2024-07-23 10:37:46.229502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.858 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:57.858 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:16:57.858 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.858 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.858 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:57.858 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.858 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:58.426 [2024-07-23 10:37:46.634080] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:58.426 [2024-07-23 10:37:46.634227] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:58.426 [2024-07-23 10:37:46.634284] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:58.426 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:58.426 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6e31ab84-7307-4cae-bd3a-6d5e88e810ae 00:16:58.426 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=6e31ab84-7307-4cae-bd3a-6d5e88e810ae 00:16:58.426 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:58.426 10:37:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:16:58.426 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:58.426 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:58.426 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:58.686 10:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e31ab84-7307-4cae-bd3a-6d5e88e810ae -t 2000 00:16:58.944 [ 00:16:58.944 { 00:16:58.944 "name": "6e31ab84-7307-4cae-bd3a-6d5e88e810ae", 00:16:58.944 "aliases": [ 00:16:58.944 "lvs/lvol" 00:16:58.944 ], 00:16:58.944 "product_name": "Logical Volume", 00:16:58.944 "block_size": 4096, 00:16:58.944 "num_blocks": 38912, 00:16:58.944 "uuid": "6e31ab84-7307-4cae-bd3a-6d5e88e810ae", 00:16:58.944 "assigned_rate_limits": { 00:16:58.944 "rw_ios_per_sec": 0, 00:16:58.944 "rw_mbytes_per_sec": 0, 00:16:58.944 "r_mbytes_per_sec": 0, 00:16:58.944 "w_mbytes_per_sec": 0 00:16:58.944 }, 00:16:58.944 "claimed": false, 00:16:58.944 "zoned": false, 00:16:58.944 "supported_io_types": { 00:16:58.944 "read": true, 00:16:58.944 "write": true, 00:16:58.944 "unmap": true, 00:16:58.944 "write_zeroes": true, 00:16:58.944 "flush": false, 00:16:58.944 "reset": true, 00:16:58.944 "compare": false, 00:16:58.944 "compare_and_write": false, 00:16:58.944 "abort": false, 00:16:58.944 "nvme_admin": false, 00:16:58.944 "nvme_io": false 00:16:58.944 }, 00:16:58.944 "driver_specific": { 00:16:58.944 "lvol": { 00:16:58.944 "lvol_store_uuid": "44ad6a8a-366e-4553-a07c-b89e2c78baca", 00:16:58.944 "base_bdev": "aio_bdev", 00:16:58.944 "thin_provision": false, 00:16:58.944 "num_allocated_clusters": 38, 00:16:58.944 "snapshot": false, 00:16:58.944 
"clone": false, 00:16:58.944 "esnap_clone": false 00:16:58.944 } 00:16:58.944 } 00:16:58.944 } 00:16:58.944 ] 00:16:58.944 10:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:16:58.944 10:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44ad6a8a-366e-4553-a07c-b89e2c78baca 00:16:58.944 10:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:59.203 10:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:59.203 10:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44ad6a8a-366e-4553-a07c-b89e2c78baca 00:16:59.203 10:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:59.462 10:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:59.462 10:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:59.720 [2024-07-23 10:37:48.111660] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:59.720 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44ad6a8a-366e-4553-a07c-b89e2c78baca 00:16:59.720 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:59.721 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 44ad6a8a-366e-4553-a07c-b89e2c78baca 00:16:59.721 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.721 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.721 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.721 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.721 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.721 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.721 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.721 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:59.721 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44ad6a8a-366e-4553-a07c-b89e2c78baca 00:16:59.979 request: 00:16:59.979 { 00:16:59.979 "uuid": "44ad6a8a-366e-4553-a07c-b89e2c78baca", 00:16:59.979 "method": "bdev_lvol_get_lvstores", 00:16:59.979 "req_id": 1 00:16:59.979 } 00:16:59.979 Got JSON-RPC error response 00:16:59.979 response: 00:16:59.979 { 00:16:59.979 "code": -19, 00:16:59.979 "message": "No such device" 00:16:59.979 } 00:16:59.979 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:59.979 10:37:48 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:59.979 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:59.979 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:59.979 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:00.239 aio_bdev 00:17:00.519 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6e31ab84-7307-4cae-bd3a-6d5e88e810ae 00:17:00.519 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=6e31ab84-7307-4cae-bd3a-6d5e88e810ae 00:17:00.519 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:00.519 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:00.519 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:00.519 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:00.519 10:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:00.783 10:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e31ab84-7307-4cae-bd3a-6d5e88e810ae -t 2000 00:17:01.041 [ 00:17:01.041 { 00:17:01.041 "name": "6e31ab84-7307-4cae-bd3a-6d5e88e810ae", 00:17:01.041 "aliases": [ 00:17:01.041 "lvs/lvol" 00:17:01.041 ], 00:17:01.041 "product_name": "Logical Volume", 00:17:01.041 "block_size": 4096, 
00:17:01.041 "num_blocks": 38912, 00:17:01.041 "uuid": "6e31ab84-7307-4cae-bd3a-6d5e88e810ae", 00:17:01.041 "assigned_rate_limits": { 00:17:01.041 "rw_ios_per_sec": 0, 00:17:01.041 "rw_mbytes_per_sec": 0, 00:17:01.041 "r_mbytes_per_sec": 0, 00:17:01.041 "w_mbytes_per_sec": 0 00:17:01.041 }, 00:17:01.041 "claimed": false, 00:17:01.041 "zoned": false, 00:17:01.041 "supported_io_types": { 00:17:01.041 "read": true, 00:17:01.041 "write": true, 00:17:01.041 "unmap": true, 00:17:01.041 "write_zeroes": true, 00:17:01.041 "flush": false, 00:17:01.041 "reset": true, 00:17:01.041 "compare": false, 00:17:01.041 "compare_and_write": false, 00:17:01.041 "abort": false, 00:17:01.041 "nvme_admin": false, 00:17:01.041 "nvme_io": false 00:17:01.041 }, 00:17:01.041 "driver_specific": { 00:17:01.041 "lvol": { 00:17:01.041 "lvol_store_uuid": "44ad6a8a-366e-4553-a07c-b89e2c78baca", 00:17:01.041 "base_bdev": "aio_bdev", 00:17:01.041 "thin_provision": false, 00:17:01.041 "num_allocated_clusters": 38, 00:17:01.041 "snapshot": false, 00:17:01.041 "clone": false, 00:17:01.041 "esnap_clone": false 00:17:01.041 } 00:17:01.041 } 00:17:01.041 } 00:17:01.041 ] 00:17:01.041 10:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:01.041 10:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44ad6a8a-366e-4553-a07c-b89e2c78baca 00:17:01.041 10:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:01.300 10:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:01.300 10:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44ad6a8a-366e-4553-a07c-b89e2c78baca 00:17:01.300 10:37:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:01.559 10:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:01.559 10:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e31ab84-7307-4cae-bd3a-6d5e88e810ae 00:17:01.817 10:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 44ad6a8a-366e-4553-a07c-b89e2c78baca 00:17:02.076 10:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:02.334 10:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:02.594 00:17:02.594 real 0m19.865s 00:17:02.594 user 0m49.972s 00:17:02.594 sys 0m4.510s 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:02.594 ************************************ 00:17:02.594 END TEST lvs_grow_dirty 00:17:02.594 ************************************ 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:02.594 
10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:02.594 nvmf_trace.0 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:02.594 10:37:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:02.595 10:37:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:02.595 rmmod nvme_tcp 00:17:02.595 rmmod nvme_fabrics 00:17:02.595 rmmod nvme_keyring 00:17:02.595 10:37:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:02.595 10:37:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:02.595 10:37:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:02.595 10:37:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3811676 ']' 00:17:02.595 10:37:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3811676 00:17:02.595 10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3811676 ']' 00:17:02.595 10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3811676 00:17:02.595 10:37:50 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:02.595 10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:02.595 10:37:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3811676 00:17:02.595 10:37:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:02.595 10:37:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:02.595 10:37:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3811676' 00:17:02.595 killing process with pid 3811676 00:17:02.595 10:37:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3811676 00:17:02.595 10:37:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3811676 00:17:02.855 10:37:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:02.855 10:37:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:02.855 10:37:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:02.855 10:37:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:02.855 10:37:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:02.855 10:37:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.855 10:37:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.855 10:37:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.765 10:37:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:04.765 00:17:04.765 real 0m42.707s 00:17:04.765 user 1m13.620s 00:17:04.765 sys 0m8.077s 00:17:04.765 10:37:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:04.765 10:37:53 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:04.765 ************************************ 00:17:04.765 END TEST nvmf_lvs_grow 00:17:04.765 ************************************ 00:17:04.765 10:37:53 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:04.765 10:37:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:04.765 10:37:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:04.765 10:37:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:05.024 ************************************ 00:17:05.024 START TEST nvmf_bdev_io_wait 00:17:05.024 ************************************ 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:05.024 * Looking for test storage... 
00:17:05.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:05.024 10:37:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.929 10:37:54 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:06.929 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:06.929 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:06.929 10:37:54 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:06.929 Found net devices under 0000:08:00.0: cvl_0_0 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:06.929 Found net devices under 0000:08:00.1: cvl_0_1 00:17:06.929 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:06.930 10:37:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:06.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:06.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:17:06.930 00:17:06.930 --- 10.0.0.2 ping statistics --- 00:17:06.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.930 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:17:06.930 00:17:06.930 --- 10.0.0.1 ping statistics --- 00:17:06.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.930 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3813627 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3813627 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3813627 ']' 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 [2024-07-23 10:37:55.119016] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:06.930 [2024-07-23 10:37:55.119110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.930 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.930 [2024-07-23 10:37:55.185985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.930 [2024-07-23 10:37:55.278809] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:06.930 [2024-07-23 10:37:55.278876] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.930 [2024-07-23 10:37:55.278892] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.930 [2024-07-23 10:37:55.278905] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.930 [2024-07-23 10:37:55.278916] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.930 [2024-07-23 10:37:55.278995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.930 [2024-07-23 10:37:55.279077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.930 [2024-07-23 10:37:55.279022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.930 [2024-07-23 10:37:55.279080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.930 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 [2024-07-23 10:37:55.463023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 Malloc0 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 [2024-07-23 10:37:55.525053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3813745 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3813747 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.189 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.189 { 00:17:07.189 "params": { 00:17:07.189 "name": "Nvme$subsystem", 00:17:07.189 "trtype": "$TEST_TRANSPORT", 
00:17:07.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.189 "adrfam": "ipv4", 00:17:07.189 "trsvcid": "$NVMF_PORT", 00:17:07.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.189 "hdgst": ${hdgst:-false}, 00:17:07.189 "ddgst": ${ddgst:-false} 00:17:07.189 }, 00:17:07.189 "method": "bdev_nvme_attach_controller" 00:17:07.190 } 00:17:07.190 EOF 00:17:07.190 )") 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3813750 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.190 { 00:17:07.190 "params": { 00:17:07.190 "name": "Nvme$subsystem", 00:17:07.190 "trtype": "$TEST_TRANSPORT", 00:17:07.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.190 "adrfam": "ipv4", 00:17:07.190 "trsvcid": "$NVMF_PORT", 00:17:07.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.190 "hdgst": ${hdgst:-false}, 00:17:07.190 "ddgst": ${ddgst:-false} 00:17:07.190 }, 00:17:07.190 "method": "bdev_nvme_attach_controller" 00:17:07.190 } 00:17:07.190 EOF 00:17:07.190 )") 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 
128 -o 4096 -w flush -t 1 -s 256 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3813754 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.190 { 00:17:07.190 "params": { 00:17:07.190 "name": "Nvme$subsystem", 00:17:07.190 "trtype": "$TEST_TRANSPORT", 00:17:07.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.190 "adrfam": "ipv4", 00:17:07.190 "trsvcid": "$NVMF_PORT", 00:17:07.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.190 "hdgst": ${hdgst:-false}, 00:17:07.190 "ddgst": ${ddgst:-false} 00:17:07.190 }, 00:17:07.190 "method": "bdev_nvme_attach_controller" 00:17:07.190 } 00:17:07.190 EOF 00:17:07.190 )") 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.190 { 00:17:07.190 "params": { 00:17:07.190 "name": "Nvme$subsystem", 00:17:07.190 "trtype": "$TEST_TRANSPORT", 00:17:07.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.190 "adrfam": "ipv4", 00:17:07.190 "trsvcid": "$NVMF_PORT", 00:17:07.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.190 "hdgst": ${hdgst:-false}, 00:17:07.190 "ddgst": ${ddgst:-false} 00:17:07.190 }, 00:17:07.190 "method": "bdev_nvme_attach_controller" 00:17:07.190 } 00:17:07.190 EOF 00:17:07.190 )") 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3813745 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:07.190 "params": { 00:17:07.190 "name": "Nvme1", 00:17:07.190 "trtype": "tcp", 00:17:07.190 "traddr": "10.0.0.2", 00:17:07.190 "adrfam": "ipv4", 00:17:07.190 "trsvcid": "4420", 00:17:07.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.190 "hdgst": false, 00:17:07.190 "ddgst": false 00:17:07.190 }, 00:17:07.190 "method": "bdev_nvme_attach_controller" 00:17:07.190 }' 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
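The `gen_nvmf_target_json` calls traced above expand a heredoc into a one-subsystem JSON config and hand it to each bdevperf instance as `/dev/fd/63` via process substitution (`--json <(gen_nvmf_target_json)`). A minimal standalone sketch of that pattern, with the helper simplified and the values hard-coded to match what the log prints for subsystem 1:

```shell
#!/usr/bin/env bash
# Simplified stand-in for gen_nvmf_target_json: emits the attach-controller
# config for one subsystem, mirroring the resolved values the log prints.
gen_nvmf_target_json() {
    local subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# The real test passes this to bdevperf over a file descriptor, roughly:
#   bdevperf --json <(gen_nvmf_target_json) ...
gen_nvmf_target_json 1
```

Each of the four bdevperf processes receives its own copy of this config, which is why the resolved JSON (`"name": "Nvme1"`, `"traddr": "10.0.0.2"`, ...) is printed four times in the trace above.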
00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:07.190 "params": { 00:17:07.190 "name": "Nvme1", 00:17:07.190 "trtype": "tcp", 00:17:07.190 "traddr": "10.0.0.2", 00:17:07.190 "adrfam": "ipv4", 00:17:07.190 "trsvcid": "4420", 00:17:07.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.190 "hdgst": false, 00:17:07.190 "ddgst": false 00:17:07.190 }, 00:17:07.190 "method": "bdev_nvme_attach_controller" 00:17:07.190 }' 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:07.190 "params": { 00:17:07.190 "name": "Nvme1", 00:17:07.190 "trtype": "tcp", 00:17:07.190 "traddr": "10.0.0.2", 00:17:07.190 "adrfam": "ipv4", 00:17:07.190 "trsvcid": "4420", 00:17:07.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.190 "hdgst": false, 00:17:07.190 "ddgst": false 00:17:07.190 }, 00:17:07.190 "method": "bdev_nvme_attach_controller" 00:17:07.190 }' 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:07.190 10:37:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:07.190 "params": { 00:17:07.190 "name": "Nvme1", 00:17:07.190 "trtype": "tcp", 00:17:07.190 "traddr": "10.0.0.2", 00:17:07.190 "adrfam": "ipv4", 00:17:07.190 "trsvcid": "4420", 00:17:07.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.190 "hdgst": false, 00:17:07.190 "ddgst": false 00:17:07.190 }, 00:17:07.190 "method": "bdev_nvme_attach_controller" 00:17:07.190 }' 00:17:07.190 [2024-07-23 10:37:55.574558] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:17:07.190 [2024-07-23 10:37:55.574572] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:07.190 [2024-07-23 10:37:55.574655] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:07.190 [2024-07-23 10:37:55.574655] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:07.190 [2024-07-23 10:37:55.576242] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:07.190 [2024-07-23 10:37:55.576249] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:07.190 [2024-07-23 10:37:55.576331] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:07.190 [2024-07-23 10:37:55.576332] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:07.190 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.449 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.449 [2024-07-23 10:37:55.719300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.449 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.449 [2024-07-23 10:37:55.785993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:07.449 [2024-07-23 10:37:55.802594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.449 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.449 
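As the trace shows, the test fans out four bdevperf workloads (write on mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), records their PIDs in `WRITE_PID`, `READ_PID`, `FLUSH_PID`, and `UNMAP_PID`, and later reaps them with `wait` (the `bdev_io_wait.sh@37`-`@40` records). A sketch of that fan-out/fan-in, with a placeholder function standing in for the real bdevperf invocation:

```shell
#!/usr/bin/env bash
# Sketch of the concurrent-workload pattern from bdev_io_wait.sh.
# run_job is a placeholder; the real test runs something like:
#   bdevperf -m <coremask> -i <shm_id> --json <(gen_nvmf_target_json) \
#            -q 128 -o 4096 -w <workload> -t 1 -s 256
run_job() {
    local workload=$1 coremask=$2
    echo "running $workload on mask $coremask"
}

# Launch all four workloads in the background and capture each PID.
run_job write 0x10 & WRITE_PID=$!
run_job read  0x20 & READ_PID=$!
run_job flush 0x40 & FLUSH_PID=$!
run_job unmap 0x80 & UNMAP_PID=$!

# Reap each job; wait on an explicit PID propagates its exit status,
# so a failing workload fails the test.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
echo "all workloads finished"
```

Running the jobs concurrently is also why the per-process startup records above interleave in the log: all four bdevperf instances write to the same console at nearly the same timestamp.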
[2024-07-23 10:37:55.857119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.449 [2024-07-23 10:37:55.872476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:07.449 [2024-07-23 10:37:55.916524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.449 [2024-07-23 10:37:55.923021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:07.708 [2024-07-23 10:37:55.982188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:07.708 Running I/O for 1 seconds... 00:17:07.708 Running I/O for 1 seconds... 00:17:07.708 Running I/O for 1 seconds... 00:17:07.967 Running I/O for 1 seconds... 00:17:08.904 00:17:08.904 Latency(us) 00:17:08.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.904 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:08.904 Nvme1n1 : 1.00 145984.93 570.25 0.00 0.00 873.44 347.40 1080.13 00:17:08.904 =================================================================================================================== 00:17:08.904 Total : 145984.93 570.25 0.00 0.00 873.44 347.40 1080.13 00:17:08.904 00:17:08.904 Latency(us) 00:17:08.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.904 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:08.904 Nvme1n1 : 1.02 6158.20 24.06 0.00 0.00 20519.90 9369.22 33399.09 00:17:08.904 =================================================================================================================== 00:17:08.904 Total : 6158.20 24.06 0.00 0.00 20519.90 9369.22 33399.09 00:17:08.904 00:17:08.904 Latency(us) 00:17:08.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.904 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:08.904 Nvme1n1 : 1.01 7951.87 31.06 0.00 0.00 16001.10 10971.21 28544.57 00:17:08.904 
=================================================================================================================== 00:17:08.904 Total : 7951.87 31.06 0.00 0.00 16001.10 10971.21 28544.57 00:17:08.904 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3813747 00:17:08.904 00:17:08.904 Latency(us) 00:17:08.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.904 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:08.904 Nvme1n1 : 1.01 6645.47 25.96 0.00 0.00 19203.14 5364.24 48156.82 00:17:08.904 =================================================================================================================== 00:17:08.904 Total : 6645.47 25.96 0.00 0.00 19203.14 5364.24 48156.82 00:17:08.904 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3813750 00:17:08.904 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3813754 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:09.163 rmmod nvme_tcp 00:17:09.163 rmmod nvme_fabrics 00:17:09.163 rmmod nvme_keyring 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3813627 ']' 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3813627 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3813627 ']' 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3813627 00:17:09.163 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:09.164 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:09.164 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3813627 00:17:09.164 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:09.164 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:09.164 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3813627' 00:17:09.164 killing process with pid 3813627 00:17:09.164 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3813627 00:17:09.164 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3813627 00:17:09.424 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:09.424 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ 
tcp == \t\c\p ]] 00:17:09.424 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:09.424 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.424 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:09.424 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.424 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.424 10:37:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.329 10:37:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:11.329 00:17:11.329 real 0m6.523s 00:17:11.329 user 0m15.655s 00:17:11.329 sys 0m2.951s 00:17:11.329 10:37:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:11.329 10:37:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:11.329 ************************************ 00:17:11.329 END TEST nvmf_bdev_io_wait 00:17:11.329 ************************************ 00:17:11.329 10:37:59 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:11.329 10:37:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:11.329 10:37:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:11.329 10:37:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:11.588 ************************************ 00:17:11.588 START TEST nvmf_queue_depth 00:17:11.588 ************************************ 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:11.588 * Looking for test storage... 
00:17:11.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:11.588 10:37:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:13.536 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.536 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:13.537 Found 0000:08:00.1 (0x8086 - 
0x159b) 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:13.537 Found net devices under 0000:08:00.0: cvl_0_0 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:13.537 Found net devices under 0000:08:00.1: cvl_0_1 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:13.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:13.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:17:13.537 00:17:13.537 --- 10.0.0.2 ping statistics --- 00:17:13.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.537 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:13.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:17:13.537 00:17:13.537 --- 10.0.0.1 ping statistics --- 00:17:13.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.537 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3815376 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3815376 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3815376 ']' 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:13.537 [2024-07-23 10:38:01.719787] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:13.537 [2024-07-23 10:38:01.719882] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.537 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.537 [2024-07-23 10:38:01.785282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.537 [2024-07-23 10:38:01.875080] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:13.537 [2024-07-23 10:38:01.875150] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.537 [2024-07-23 10:38:01.875166] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.537 [2024-07-23 10:38:01.875178] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.537 [2024-07-23 10:38:01.875199] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.537 [2024-07-23 10:38:01.875231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:13.537 10:38:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:13.537 10:38:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.537 10:38:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:13.537 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.537 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:13.537 [2024-07-23 10:38:02.007415] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.537 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.537 10:38:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:13.538 10:38:02 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.538 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:13.796 Malloc0 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:13.796 [2024-07-23 10:38:02.071247] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3815424 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3815424 /var/tmp/bdevperf.sock 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3815424 ']' 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:13.796 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:13.796 [2024-07-23 10:38:02.126151] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:17:13.796 [2024-07-23 10:38:02.126243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3815424 ] 00:17:13.796 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.796 [2024-07-23 10:38:02.187501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.796 [2024-07-23 10:38:02.278959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.055 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:14.055 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:14.055 10:38:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:14.055 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.055 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:14.055 NVMe0n1 00:17:14.055 10:38:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.055 10:38:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:14.313 Running I/O for 10 seconds... 
00:17:24.292 00:17:24.293 Latency(us) 00:17:24.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.293 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:24.293 Verification LBA range: start 0x0 length 0x4000 00:17:24.293 NVMe0n1 : 10.11 7888.51 30.81 0.00 0.00 129178.97 28738.75 78060.66 00:17:24.293 =================================================================================================================== 00:17:24.293 Total : 7888.51 30.81 0.00 0.00 129178.97 28738.75 78060.66 00:17:24.293 0 00:17:24.293 10:38:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3815424 00:17:24.293 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3815424 ']' 00:17:24.293 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3815424 00:17:24.293 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:24.293 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:24.293 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3815424 00:17:24.293 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:24.293 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:24.293 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3815424' 00:17:24.293 killing process with pid 3815424 00:17:24.293 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3815424 00:17:24.293 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.293 00:17:24.293 Latency(us) 00:17:24.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.293 
=================================================================================================================== 00:17:24.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.293 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3815424 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.552 rmmod nvme_tcp 00:17:24.552 rmmod nvme_fabrics 00:17:24.552 rmmod nvme_keyring 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3815376 ']' 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3815376 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3815376 ']' 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3815376 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:24.552 10:38:12 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3815376 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3815376' 00:17:24.552 killing process with pid 3815376 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3815376 00:17:24.552 10:38:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3815376 00:17:24.811 10:38:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:24.811 10:38:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:24.811 10:38:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:24.811 10:38:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.811 10:38:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.811 10:38:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.811 10:38:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.811 10:38:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.720 10:38:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:26.720 00:17:26.720 real 0m15.363s 00:17:26.720 user 0m21.141s 00:17:26.720 sys 0m3.140s 00:17:26.720 10:38:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:26.720 10:38:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:26.720 ************************************ 00:17:26.720 END TEST nvmf_queue_depth 
00:17:26.720 ************************************ 00:17:26.980 10:38:15 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:26.980 10:38:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:26.980 10:38:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:26.980 10:38:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 ************************************ 00:17:26.980 START TEST nvmf_target_multipath 00:17:26.980 ************************************ 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:26.981 * Looking for test storage... 00:17:26.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:26.981 10:38:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:28.886 
10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.886 10:38:16 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:28.886 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:28.886 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:28.886 
10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:28.886 10:38:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:28.886 Found net devices under 0000:08:00.0: cvl_0_0 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:28.886 Found net devices under 0000:08:00.1: cvl_0_1 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:28.886 10:38:17 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:28.886 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:28.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:28.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:17:28.887 00:17:28.887 --- 10.0.0.2 ping statistics --- 00:17:28.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.887 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:17:28.887 00:17:28.887 --- 10.0.0.1 ping statistics --- 00:17:28.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.887 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:28.887 only one NIC for nvmf test 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:28.887 rmmod nvme_tcp 00:17:28.887 rmmod nvme_fabrics 00:17:28.887 rmmod nvme_keyring 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.887 10:38:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:30.793 00:17:30.793 real 0m4.002s 00:17:30.793 user 0m0.663s 00:17:30.793 sys 0m1.324s 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:30.793 10:38:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:30.793 ************************************ 00:17:30.793 END TEST nvmf_target_multipath 00:17:30.793 ************************************ 00:17:30.793 10:38:19 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:30.793 10:38:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:30.793 10:38:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:30.793 10:38:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:31.052 ************************************ 00:17:31.052 START TEST nvmf_zcopy 00:17:31.052 ************************************ 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:31.052 * Looking for test storage... 
00:17:31.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:31.052 10:38:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:32.954 10:38:20 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:32.954 10:38:20 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.954 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:32.954 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:32.955 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.955 10:38:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:32.955 Found net devices under 0000:08:00.0: cvl_0_0 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:32.955 Found net devices under 0000:08:00.1: cvl_0_1 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.955 10:38:21 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:32.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:17:32.955 00:17:32.955 --- 10.0.0.2 ping statistics --- 00:17:32.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.955 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:32.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:17:32.955 00:17:32.955 --- 10.0.0.1 ping statistics --- 00:17:32.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.955 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3819370 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3819370 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3819370 ']' 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.955 [2024-07-23 10:38:21.198935] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:32.955 [2024-07-23 10:38:21.199032] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.955 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.955 [2024-07-23 10:38:21.263378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.955 [2024-07-23 10:38:21.349900] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.955 [2024-07-23 10:38:21.349967] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.955 [2024-07-23 10:38:21.349983] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.955 [2024-07-23 10:38:21.349998] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.955 [2024-07-23 10:38:21.350009] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
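The `nvmf_tcp_init` steps traced above (netns creation, moving the target NIC, assigning 10.0.0.1/10.0.0.2, bringing links up, opening port 4420) can be summarized as the following sketch. Interface names and addresses are taken from the log; since the real commands need root and the `cvl_*` NICs, this version only prints them.

```shell
#!/usr/bin/env bash
# Sketch of the network split performed by nvmf_tcp_init in nvmf/common.sh.
# Names/IPs copied from the log above; commands are emitted, not executed,
# because the real sequence requires root and the physical cvl_* devices.
TARGET_IF=cvl_0_0          # NIC handed to the SPDK target namespace
INITIATOR_IF=cvl_0_1       # NIC left in the default namespace
NETNS=cvl_0_0_ns_spdk

nvmf_netns_cmds() {
    cat <<EOF
ip netns add $NETNS
ip link set $TARGET_IF netns $NETNS
ip addr add 10.0.0.1/24 dev $INITIATOR_IF
ip netns exec $NETNS ip addr add 10.0.0.2/24 dev $TARGET_IF
ip link set $INITIATOR_IF up
ip netns exec $NETNS ip link set $TARGET_IF up
ip netns exec $NETNS ip link set lo up
iptables -I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT
EOF
}

nvmf_netns_cmds
```

After this split, the two `ping -c 1` checks in the log (10.0.0.2 from the default namespace, 10.0.0.1 from inside the namespace) confirm the data path before `nvmf_tgt` is started with `ip netns exec`.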
00:17:32.955 [2024-07-23 10:38:21.350039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.955 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:33.212 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:33.213 [2024-07-23 10:38:21.480774] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
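The target-side setup that zcopy.sh drives through `rpc_cmd` reduces to the RPC sequence below (arguments copied from the trace). Here `rpc` is a stub standing in for `scripts/rpc.py` against the running `nvmf_tgt`, so the sequence can be shown without a live target.

```shell
#!/usr/bin/env bash
# Sketch of the zcopy.sh target setup; 'rpc' is a stub for scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -c 0 --zcopy   # TCP transport, zero-copy enabled
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0          # 32 MiB malloc bdev, 4096-byte blocks
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

The `--zcopy` flag on `nvmf_create_transport` is the point of this test; everything else is the standard subsystem/listener/namespace scaffolding.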
00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:33.213 [2024-07-23 10:38:21.496938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:33.213 malloc0 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:33.213 { 00:17:33.213 "params": { 00:17:33.213 "name": "Nvme$subsystem", 00:17:33.213 "trtype": "$TEST_TRANSPORT", 00:17:33.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:33.213 "adrfam": "ipv4", 00:17:33.213 "trsvcid": "$NVMF_PORT", 00:17:33.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:33.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:33.213 "hdgst": ${hdgst:-false}, 00:17:33.213 "ddgst": ${ddgst:-false} 00:17:33.213 }, 00:17:33.213 "method": "bdev_nvme_attach_controller" 00:17:33.213 } 00:17:33.213 EOF 00:17:33.213 )") 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:33.213 10:38:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:33.213 "params": { 00:17:33.213 "name": "Nvme1", 00:17:33.213 "trtype": "tcp", 00:17:33.213 "traddr": "10.0.0.2", 00:17:33.213 "adrfam": "ipv4", 00:17:33.213 "trsvcid": "4420", 00:17:33.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:33.213 "hdgst": false, 00:17:33.213 "ddgst": false 00:17:33.213 }, 00:17:33.213 "method": "bdev_nvme_attach_controller" 00:17:33.213 }' 00:17:33.213 [2024-07-23 10:38:21.577420] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
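The `gen_nvmf_target_json` output that bdevperf receives on `/dev/fd/62` is the attach fragment printed in the trace above. A minimal sketch that regenerates it (values as substituted in the log: `Nvme1`, `10.0.0.2:4420`, digests off):

```shell
#!/usr/bin/env bash
# Sketch of the bdev_nvme_attach_controller fragment emitted by
# gen_nvmf_target_json in nvmf/common.sh, with the values from this run.
gen_attach_config() {
    cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_attach_config
```

bdevperf then attaches `Nvme1n1` from this config and runs the verify workload shown in the 10-second latency table further down.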
00:17:33.213 [2024-07-23 10:38:21.577528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3819397 ] 00:17:33.213 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.213 [2024-07-23 10:38:21.639579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.470 [2024-07-23 10:38:21.731232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.728 Running I/O for 10 seconds... 00:17:43.725 00:17:43.725 Latency(us) 00:17:43.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.725 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:43.725 Verification LBA range: start 0x0 length 0x1000 00:17:43.725 Nvme1n1 : 10.01 5412.15 42.28 0.00 0.00 23577.07 527.93 32428.18 00:17:43.725 =================================================================================================================== 00:17:43.725 Total : 5412.15 42.28 0.00 0.00 23577.07 527.93 32428.18 00:17:43.983 10:38:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3820384 00:17:43.984 10:38:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:43.984 10:38:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.984 10:38:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:43.984 10:38:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:43.984 10:38:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:43.984 10:38:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:43.984 10:38:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:43.984 10:38:32 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:43.984 { 00:17:43.984 "params": { 00:17:43.984 "name": "Nvme$subsystem", 00:17:43.984 "trtype": "$TEST_TRANSPORT", 00:17:43.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.984 "adrfam": "ipv4", 00:17:43.984 "trsvcid": "$NVMF_PORT", 00:17:43.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.984 "hdgst": ${hdgst:-false}, 00:17:43.984 "ddgst": ${ddgst:-false} 00:17:43.984 }, 00:17:43.984 "method": "bdev_nvme_attach_controller" 00:17:43.984 } 00:17:43.984 EOF 00:17:43.984 )") 00:17:43.984 10:38:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:43.984 [2024-07-23 10:38:32.278907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.278957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 10:38:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:43.984 10:38:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:43.984 10:38:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:43.984 "params": { 00:17:43.984 "name": "Nvme1", 00:17:43.984 "trtype": "tcp", 00:17:43.984 "traddr": "10.0.0.2", 00:17:43.984 "adrfam": "ipv4", 00:17:43.984 "trsvcid": "4420", 00:17:43.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.984 "hdgst": false, 00:17:43.984 "ddgst": false 00:17:43.984 }, 00:17:43.984 "method": "bdev_nvme_attach_controller" 00:17:43.984 }' 00:17:43.984 [2024-07-23 10:38:32.286871] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.286897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.294890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 
10:38:32.294915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.302910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.302934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.310934] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.310959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.318953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.318977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.319423] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:43.984 [2024-07-23 10:38:32.319518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3820384 ] 00:17:43.984 [2024-07-23 10:38:32.326974] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.326998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.334996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.335019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.343017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.343040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 EAL: No free 2048 kB hugepages reported on 
node 1 00:17:43.984 [2024-07-23 10:38:32.351039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.351062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.359062] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.359086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.367083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.367106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.375119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.375142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.381763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.984 [2024-07-23 10:38:32.383138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.383167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.391233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.391285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.399232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.399280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.407206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.407232] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.415238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.415267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.423262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.423292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.431338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.431389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.439347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.439408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.447320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.447347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.455358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.455391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.463385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.463415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-23 10:38:32.471389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.471416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:43.984 [2024-07-23 10:38:32.472281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.984 [2024-07-23 10:38:32.479400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-23 10:38:32.479423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.487504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.487550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.495528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.495578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.503549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.503598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.511576] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.511625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.519593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.519643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.527590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.527636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.535632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.535681] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.543659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.543708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.551657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.551701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.559633] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.559660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.567779] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.567807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.575741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.575767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.583764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.583790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.591787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.591814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.599805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.599831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:44.243 [2024-07-23 10:38:32.607828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.243 [2024-07-23 10:38:32.607854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.243 [2024-07-23 10:38:32.615849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.244 [2024-07-23 10:38:32.615874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.244 [2024-07-23 10:38:32.623871] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.244 [2024-07-23 10:38:32.623897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.244 [2024-07-23 10:38:32.631901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.244 [2024-07-23 10:38:32.631929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.244 Running I/O for 5 seconds... 
00:17:44.244 [2024-07-23 10:38:32.639916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.244 [2024-07-23 10:38:32.639948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.244 [2024-07-23 10:38:32.652412] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.244 [2024-07-23 10:38:32.652444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.244 [2024-07-23 10:38:32.663046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.244 [2024-07-23 10:38:32.663077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.244 [2024-07-23 10:38:32.676309] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.244 [2024-07-23 10:38:32.676340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.244 [2024-07-23 10:38:32.688666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.244 [2024-07-23 10:38:32.688695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.244 [2024-07-23 10:38:32.701061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.244 [2024-07-23 10:38:32.701091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.244 [2024-07-23 10:38:32.713237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.244 [2024-07-23 10:38:32.713267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.244 [2024-07-23 10:38:32.725218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.244 [2024-07-23 10:38:32.725249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.244 [2024-07-23 10:38:32.737598] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.244 [2024-07-23 10:38:32.737628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.749461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.749500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.761364] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.761394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.773579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.773608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.785753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.785782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.798037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.798067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.810355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.810384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.822604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.822634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.834925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.834955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.847375] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.847405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.859471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.859510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.871599] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.871633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.885317] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.885346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.896207] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.896236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.907710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.907739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.919707] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.919736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.932117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 
[2024-07-23 10:38:32.932146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.944541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.944571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.956794] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.956823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.968571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.968600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.980708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.980737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:32.992923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:32.992951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.503 [2024-07-23 10:38:33.004770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.503 [2024-07-23 10:38:33.004800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.016705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.016734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.028995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.029023] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.040935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.040964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.053077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.053106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.065348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.065377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.077735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.077764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.090184] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.090214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.102592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.102621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.115349] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.115379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.127768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.127797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:44.762 [2024-07-23 10:38:33.140153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.140182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.152417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.152447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.164593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.164625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.176843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.176877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.188523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.188562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.200437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.200492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.212333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.212363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.225182] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.225211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.237221] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.237250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.249038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.249088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.762 [2024-07-23 10:38:33.260956] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.762 [2024-07-23 10:38:33.260984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.273101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.273132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.285348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.285377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.297528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.297558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.309662] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.309691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.321505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.321533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.333753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.333783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.345613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.345642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.357524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.357554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.369666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.369696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.382421] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.382450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.394411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.394441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.406394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.406423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.418579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.418608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.430548] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 
[2024-07-23 10:38:33.430577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.442566] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.442595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.454783] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.454812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.467149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.467181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.478704] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.478746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.490937] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.490966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.503190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.503219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.024 [2024-07-23 10:38:33.515133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.024 [2024-07-23 10:38:33.515163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.285 [2024-07-23 10:38:33.529229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.285 [2024-07-23 10:38:33.529260] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.285 [2024-07-23 10:38:33.539676] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.285 [2024-07-23 10:38:33.539705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.285 [2024-07-23 10:38:33.552086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.285 [2024-07-23 10:38:33.552119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.285 [2024-07-23 10:38:33.564189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.285 [2024-07-23 10:38:33.564219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.285 [2024-07-23 10:38:33.576206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.285 [2024-07-23 10:38:33.576235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.285 [2024-07-23 10:38:33.588105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.285 [2024-07-23 10:38:33.588135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.285 [2024-07-23 10:38:33.600159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.285 [2024-07-23 10:38:33.600188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.285 [2024-07-23 10:38:33.612080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.285 [2024-07-23 10:38:33.612109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.285 [2024-07-23 10:38:33.624315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.285 [2024-07-23 10:38:33.624344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:45.285 [2024-07-23 10:38:33.636163] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.285 [2024-07-23 10:38:33.636206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair — subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats at roughly 12 ms intervals from [2024-07-23 10:38:33.648] through [2024-07-23 10:38:35.601] (elapsed markers 00:17:45.285 to 00:17:47.366) ...]
00:17:47.366 [2024-07-23 10:38:35.614042] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.614071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:17:47.366 [2024-07-23 10:38:35.625868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.625897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.637808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.637838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.649843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.649873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.661659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.661689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.673000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.673030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.685175] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.685205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.697029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.697059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.708926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.708959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.720880] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.720910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.733093] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.733123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.745429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.745458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.757615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.757644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.769523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.769585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.783404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.783434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.794892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.794922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.806659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.806689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.818625] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.818654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.830648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.830678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.842465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.842503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.854506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.854537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.366 [2024-07-23 10:38:35.868449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.366 [2024-07-23 10:38:35.868488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:35.879617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:35.879647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:35.891451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:35.891493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:35.903851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:35.903880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:35.916064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 
[2024-07-23 10:38:35.916093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:35.927990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:35.928018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:35.940429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:35.940458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:35.952535] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:35.952564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:35.964791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:35.964820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:35.978688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:35.978717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:35.990013] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:35.990042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:36.002168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:36.002197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:36.014258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:36.014289] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:36.026522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:36.026551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:36.038509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:36.038537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:36.050528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:36.050557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:36.062957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:36.062994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:36.075357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:36.075385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:36.087434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:36.087463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:36.099256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:36.099285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.627 [2024-07-23 10:38:36.111426] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:36.111454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:47.627 [2024-07-23 10:38:36.123472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.627 [2024-07-23 10:38:36.123510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.135861] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.135898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.147695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.147723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.159319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.159347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.171216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.171245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.185540] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.185588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.197422] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.197451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.209500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.209529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.221648] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.221677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.233636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.233665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.245933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.245962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.258169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.258197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.270403] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.270431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.282446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.282474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.294567] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.294596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.306539] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.306569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.318685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.318714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.330650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.330679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.342719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.342748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.356914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.356944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.367808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.367836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.888 [2024-07-23 10:38:36.379977] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.888 [2024-07-23 10:38:36.380005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.148 [2024-07-23 10:38:36.392229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.148 [2024-07-23 10:38:36.392260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.148 [2024-07-23 10:38:36.404211] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.148 [2024-07-23 10:38:36.404241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.416007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 
[2024-07-23 10:38:36.416037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.427067] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.427097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.439148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.439178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.450860] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.450889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.463109] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.463139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.475215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.475246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.487308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.487338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.499521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.499551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.512060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.512090] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.524259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.524288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.536572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.536601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.548288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.548317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.560169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.560197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.572258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.572287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.584451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.584489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.596749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.596780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.609225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.609253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:48.149 [2024-07-23 10:38:36.621696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.621725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.633684] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.633713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.149 [2024-07-23 10:38:36.645862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.149 [2024-07-23 10:38:36.645891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.657845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.657875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.669933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.669962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.681907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.681937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.693816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.693845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.707683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.707712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.719034] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.719063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.730825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.730856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.742856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.742885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.754814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.754844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.766847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.766876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.779311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.779340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.791283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.791312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.803582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.803611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.815845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.815874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.828194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.828223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.840539] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.840568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.852307] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.852336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.864762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.864791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.877120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.877150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.888887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.888916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.409 [2024-07-23 10:38:36.900701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.409 [2024-07-23 10:38:36.900731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:36.912603] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 
[2024-07-23 10:38:36.912644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:36.924967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:36.924996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:36.937393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:36.937423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:36.949697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:36.949727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:36.961994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:36.962023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:36.974247] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:36.974277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:36.986257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:36.986287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.000244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.000275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.011351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.011380] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.023664] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.023694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.036053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.036083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.048217] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.048247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.060271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.060302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.072219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.072248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.084190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.084219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.096218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.096247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.108487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.108516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:48.670 [2024-07-23 10:38:37.120508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.120538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.134665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.134695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.146174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.146216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.158558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.158588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.670 [2024-07-23 10:38:37.170915] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.670 [2024-07-23 10:38:37.170945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.183135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.183165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.195117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.195147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.207315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.207345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.219593] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.219623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.231619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.231648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.243638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.243668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.255965] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.255996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.268218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.268247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.280339] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.280369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.292664] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.292694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.304628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.304657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.318802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.318832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.330738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.330768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.342259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.342289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.354254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.354283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.365839] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.365868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.931 [2024-07-23 10:38:37.377579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.931 [2024-07-23 10:38:37.377620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.932 [2024-07-23 10:38:37.389756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.932 [2024-07-23 10:38:37.389785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.932 [2024-07-23 10:38:37.403549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.932 [2024-07-23 10:38:37.403588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.932 [2024-07-23 10:38:37.415280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.932 
[2024-07-23 10:38:37.415309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.932 [2024-07-23 10:38:37.427709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.932 [2024-07-23 10:38:37.427739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.439720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.439750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.451740] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.451769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.463680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.463709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.475510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.475539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.487868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.487897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.500283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.500313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.512446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.512476] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.524833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.524862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.537115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.537148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.548975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.549004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.560883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.560917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.577202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.577245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.589498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.589529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.603625] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.603659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.615907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.615947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:49.193 [2024-07-23 10:38:37.629926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.629958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.641431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.641463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.653607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.653636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.662471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.662506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 00:17:49.193 Latency(us) 00:17:49.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.193 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:49.193 Nvme1n1 : 5.01 10507.80 82.09 0.00 0.00 12164.00 5485.61 23301.69 00:17:49.193 =================================================================================================================== 00:17:49.193 Total : 10507.80 82.09 0.00 0.00 12164.00 5485.61 23301.69 00:17:49.193 [2024-07-23 10:38:37.669322] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.669348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.677347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.677376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 
[2024-07-23 10:38:37.685449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.685522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.193 [2024-07-23 10:38:37.693468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.193 [2024-07-23 10:38:37.693537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.701475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.701545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.709505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.709563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.717544] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.717602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.725564] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.725627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.733589] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.733649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.741608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.741665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.749623] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.749680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.757624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.757673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.765648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.765699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.773689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.773748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.781691] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.781741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.789708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.789757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.797764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.797826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.805777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.805821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.813716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.813738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.821736] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.821759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 [2024-07-23 10:38:37.829758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.454 [2024-07-23 10:38:37.829781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3820384) - No such process 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3820384 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:49.454 delay0 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.454 10:38:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:49.454 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.454 [2024-07-23 10:38:37.912456] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:56.031 Initializing NVMe Controllers 00:17:56.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:56.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:56.031 Initialization complete. Launching workers. 00:17:56.031 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 738 00:17:56.031 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1025, failed to submit 33 00:17:56.031 success 862, unsuccess 163, failed 0 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.031 rmmod nvme_tcp 00:17:56.031 rmmod nvme_fabrics 00:17:56.031 rmmod nvme_keyring 00:17:56.031 10:38:44 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3819370 ']' 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3819370 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3819370 ']' 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3819370 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3819370 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3819370' 00:17:56.031 killing process with pid 3819370 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3819370 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3819370 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.031 10:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.573 10:38:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:58.573 00:17:58.573 real 0m27.207s 00:17:58.573 user 0m40.985s 00:17:58.573 sys 0m7.592s 00:17:58.573 10:38:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:58.573 10:38:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.573 ************************************ 00:17:58.573 END TEST nvmf_zcopy 00:17:58.573 ************************************ 00:17:58.573 10:38:46 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:58.573 10:38:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:58.573 10:38:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:58.573 10:38:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.573 ************************************ 00:17:58.573 START TEST nvmf_nmic 00:17:58.573 ************************************ 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:58.573 * Looking for test storage... 
00:17:58.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.573 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:58.574 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:58.574 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:58.574 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.574 10:38:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.574 10:38:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.574 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:58.574 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:58.574 10:38:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:58.574 10:38:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.951 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:59.952 10:38:48 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:59.952 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:59.952 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 
== e810 ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:59.952 Found net devices under 0000:08:00.0: cvl_0_0 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:59.952 Found net devices under 0000:08:00.1: cvl_0_1 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.952 10:38:48 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:59.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:17:59.952 00:17:59.952 --- 10.0.0.2 ping statistics --- 00:17:59.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.952 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:59.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:17:59.952 00:17:59.952 --- 10.0.0.1 ping statistics --- 00:17:59.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.952 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3822904 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3822904 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3822904 ']' 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:59.952 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.210 [2024-07-23 10:38:48.499463] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:00.210 [2024-07-23 10:38:48.499569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.210 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.210 [2024-07-23 10:38:48.564216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:00.210 [2024-07-23 10:38:48.653440] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.210 [2024-07-23 10:38:48.653507] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.210 [2024-07-23 10:38:48.653525] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.210 [2024-07-23 10:38:48.653539] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.210 [2024-07-23 10:38:48.653551] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:00.210 [2024-07-23 10:38:48.653649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.210 [2024-07-23 10:38:48.653751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.210 [2024-07-23 10:38:48.653836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.210 [2024-07-23 10:38:48.653840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.470 [2024-07-23 10:38:48.799148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.470 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.470 Malloc0 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.471 [2024-07-23 10:38:48.849490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:00.471 test case1: single bdev can't be used in multiple subsystems 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.471 [2024-07-23 10:38:48.873338] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:00.471 [2024-07-23 10:38:48.873370] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:00.471 [2024-07-23 10:38:48.873386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.471 request: 00:18:00.471 { 00:18:00.471 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:00.471 "namespace": { 00:18:00.471 "bdev_name": "Malloc0", 00:18:00.471 "no_auto_visible": false 00:18:00.471 }, 00:18:00.471 "method": "nvmf_subsystem_add_ns", 00:18:00.471 "req_id": 1 00:18:00.471 } 00:18:00.471 Got JSON-RPC error response 00:18:00.471 response: 00:18:00.471 { 00:18:00.471 "code": -32602, 00:18:00.471 "message": "Invalid parameters" 00:18:00.471 } 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:18:00.471 Adding namespace failed - expected result. 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:00.471 test case2: host connect to nvmf target in multiple paths 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.471 [2024-07-23 10:38:48.881444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.471 10:38:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:01.040 10:38:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:01.609 10:38:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:01.609 10:38:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:01.609 10:38:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.609 10:38:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:01.609 10:38:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:03.514 10:38:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:03.514 10:38:51 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:03.514 10:38:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:03.514 10:38:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:03.514 10:38:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.514 10:38:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:03.514 10:38:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:03.514 [global] 00:18:03.514 thread=1 00:18:03.514 invalidate=1 00:18:03.514 rw=write 00:18:03.514 time_based=1 00:18:03.514 runtime=1 00:18:03.514 ioengine=libaio 00:18:03.514 direct=1 00:18:03.514 bs=4096 00:18:03.514 iodepth=1 00:18:03.514 norandommap=0 00:18:03.514 numjobs=1 00:18:03.514 00:18:03.514 verify_dump=1 00:18:03.514 verify_backlog=512 00:18:03.514 verify_state_save=0 00:18:03.514 do_verify=1 00:18:03.514 verify=crc32c-intel 00:18:03.514 [job0] 00:18:03.514 filename=/dev/nvme0n1 00:18:03.514 Could not set queue depth (nvme0n1) 00:18:03.774 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:03.774 fio-3.35 00:18:03.774 Starting 1 thread 00:18:04.715 00:18:04.715 job0: (groupid=0, jobs=1): err= 0: pid=3823395: Tue Jul 23 10:38:53 2024 00:18:04.715 read: IOPS=144, BW=579KiB/s (593kB/s)(580KiB/1001msec) 00:18:04.715 slat (nsec): min=6326, max=44161, avg=11049.61, stdev=8543.93 00:18:04.715 clat (usec): min=199, max=41118, avg=5849.87, stdev=14094.10 00:18:04.715 lat (usec): min=207, max=41132, avg=5860.92, stdev=14101.59 00:18:04.715 clat percentiles (usec): 00:18:04.715 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 215], 00:18:04.715 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:18:04.715 | 
70.00th=[ 243], 80.00th=[ 273], 90.00th=[41157], 95.00th=[41157], 00:18:04.715 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:04.715 | 99.99th=[41157] 00:18:04.715 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:04.715 slat (usec): min=9, max=31765, avg=83.95, stdev=1402.91 00:18:04.715 clat (usec): min=158, max=394, avg=203.34, stdev=34.57 00:18:04.715 lat (usec): min=167, max=32142, avg=287.29, stdev=1411.00 00:18:04.715 clat percentiles (usec): 00:18:04.715 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 176], 00:18:04.715 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 204], 00:18:04.715 | 70.00th=[ 212], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 281], 00:18:04.715 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 396], 99.95th=[ 396], 00:18:04.715 | 99.99th=[ 396] 00:18:04.715 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:04.715 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:04.715 lat (usec) : 250=87.67%, 500=9.28% 00:18:04.715 lat (msec) : 50=3.04% 00:18:04.715 cpu : usr=1.00%, sys=1.60%, ctx=659, majf=0, minf=1 00:18:04.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:04.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.715 issued rwts: total=145,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:04.715 00:18:04.715 Run status group 0 (all jobs): 00:18:04.715 READ: bw=579KiB/s (593kB/s), 579KiB/s-579KiB/s (593kB/s-593kB/s), io=580KiB (594kB), run=1001-1001msec 00:18:04.715 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:18:04.715 00:18:04.715 Disk stats (read/write): 00:18:04.715 nvme0n1: ios=44/512, merge=0/0, ticks=1703/92, 
in_queue=1795, util=98.80% 00:18:04.715 10:38:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:04.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.972 rmmod nvme_tcp 00:18:04.972 rmmod nvme_fabrics 00:18:04.972 rmmod nvme_keyring 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@489 -- # '[' -n 3822904 ']' 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3822904 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3822904 ']' 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3822904 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3822904 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3822904' 00:18:04.972 killing process with pid 3822904 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3822904 00:18:04.972 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3822904 00:18:05.231 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:05.231 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:05.231 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:05.231 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.231 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:05.231 10:38:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.231 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.231 10:38:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.138 10:38:55 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:07.138 00:18:07.138 real 0m9.036s 00:18:07.138 user 0m20.264s 00:18:07.138 sys 0m2.018s 00:18:07.138 10:38:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:07.138 10:38:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:07.138 ************************************ 00:18:07.138 END TEST nvmf_nmic 00:18:07.138 ************************************ 00:18:07.396 10:38:55 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:07.397 10:38:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:07.397 10:38:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:07.397 10:38:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:07.397 ************************************ 00:18:07.397 START TEST nvmf_fio_target 00:18:07.397 ************************************ 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:07.397 * Looking for test storage... 
00:18:07.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:07.397 10:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:09.302 
10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.302 
10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:09.302 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:09.302 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.302 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:09.303 Found net devices under 0000:08:00.0: cvl_0_0 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:09.303 Found net devices under 0000:08:00.1: cvl_0_1 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.303 10:38:57 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:09.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:18:09.303 00:18:09.303 --- 10.0.0.2 ping statistics --- 00:18:09.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.303 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:09.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:09.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:18:09.303 00:18:09.303 --- 10.0.0.1 ping statistics --- 00:18:09.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.303 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3824938 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3824938 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 
-- # '[' -z 3824938 ']' 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:09.303 10:38:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.303 [2024-07-23 10:38:57.577886] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:09.303 [2024-07-23 10:38:57.577984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.303 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.303 [2024-07-23 10:38:57.642064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:09.303 [2024-07-23 10:38:57.729940] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.303 [2024-07-23 10:38:57.730004] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.303 [2024-07-23 10:38:57.730028] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.303 [2024-07-23 10:38:57.730048] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.303 [2024-07-23 10:38:57.730066] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:09.303 [2024-07-23 10:38:57.730184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.303 [2024-07-23 10:38:57.730266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.303 [2024-07-23 10:38:57.730319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.303 [2024-07-23 10:38:57.730327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.561 10:38:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:09.561 10:38:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:09.561 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:09.561 10:38:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.561 10:38:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.561 10:38:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.561 10:38:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:09.819 [2024-07-23 10:38:58.136895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.819 10:38:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:10.076 10:38:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:10.076 10:38:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:10.334 10:38:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:10.334 10:38:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:18:10.901 10:38:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:10.901 10:38:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:11.159 10:38:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:11.159 10:38:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:11.159 10:38:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:11.417 10:38:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:11.417 10:38:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:11.984 10:39:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:11.984 10:39:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:11.984 10:39:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:11.984 10:39:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:12.242 10:39:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:12.499 10:39:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:12.499 10:39:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:12.759 10:39:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:12.759 10:39:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:13.044 10:39:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.326 [2024-07-23 10:39:01.668957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.326 10:39:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:13.584 10:39:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:13.844 10:39:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:14.411 10:39:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:14.411 10:39:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:14.411 10:39:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.411 10:39:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:14.411 10:39:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:14.411 10:39:02 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:18:16.317 10:39:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:16.317 10:39:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:16.317 10:39:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:16.317 10:39:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:16.317 10:39:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.317 10:39:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:16.317 10:39:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:16.317 [global] 00:18:16.317 thread=1 00:18:16.317 invalidate=1 00:18:16.317 rw=write 00:18:16.317 time_based=1 00:18:16.317 runtime=1 00:18:16.317 ioengine=libaio 00:18:16.317 direct=1 00:18:16.317 bs=4096 00:18:16.317 iodepth=1 00:18:16.317 norandommap=0 00:18:16.317 numjobs=1 00:18:16.317 00:18:16.317 verify_dump=1 00:18:16.317 verify_backlog=512 00:18:16.317 verify_state_save=0 00:18:16.317 do_verify=1 00:18:16.317 verify=crc32c-intel 00:18:16.317 [job0] 00:18:16.317 filename=/dev/nvme0n1 00:18:16.317 [job1] 00:18:16.317 filename=/dev/nvme0n2 00:18:16.317 [job2] 00:18:16.317 filename=/dev/nvme0n3 00:18:16.317 [job3] 00:18:16.317 filename=/dev/nvme0n4 00:18:16.317 Could not set queue depth (nvme0n1) 00:18:16.317 Could not set queue depth (nvme0n2) 00:18:16.317 Could not set queue depth (nvme0n3) 00:18:16.317 Could not set queue depth (nvme0n4) 00:18:16.577 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:16.577 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:18:16.577 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:16.577 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:16.577 fio-3.35 00:18:16.577 Starting 4 threads 00:18:17.958 00:18:17.958 job0: (groupid=0, jobs=1): err= 0: pid=3825850: Tue Jul 23 10:39:06 2024 00:18:17.958 read: IOPS=1897, BW=7588KiB/s (7771kB/s)(7596KiB/1001msec) 00:18:17.958 slat (nsec): min=6126, max=61139, avg=12963.47, stdev=4852.58 00:18:17.958 clat (usec): min=203, max=1189, avg=253.34, stdev=36.17 00:18:17.958 lat (usec): min=211, max=1195, avg=266.30, stdev=36.41 00:18:17.958 clat percentiles (usec): 00:18:17.958 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:18:17.958 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:18:17.958 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 293], 00:18:17.958 | 99.00th=[ 383], 99.50th=[ 453], 99.90th=[ 742], 99.95th=[ 1188], 00:18:17.958 | 99.99th=[ 1188] 00:18:17.958 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:17.958 slat (nsec): min=7829, max=49444, avg=18796.01, stdev=4997.83 00:18:17.958 clat (usec): min=137, max=2197, avg=213.67, stdev=66.94 00:18:17.958 lat (usec): min=146, max=2218, avg=232.46, stdev=68.76 00:18:17.958 clat percentiles (usec): 00:18:17.958 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 165], 20.00th=[ 178], 00:18:17.958 | 30.00th=[ 186], 40.00th=[ 200], 50.00th=[ 210], 60.00th=[ 217], 00:18:17.958 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 273], 95.00th=[ 310], 00:18:17.958 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 865], 99.95th=[ 1123], 00:18:17.958 | 99.99th=[ 2212] 00:18:17.958 bw ( KiB/s): min= 8192, max= 8192, per=51.45%, avg=8192.00, stdev= 0.00, samples=1 00:18:17.958 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:17.958 lat (usec) : 250=72.23%, 500=27.64%, 750=0.03%, 1000=0.03% 
00:18:17.958 lat (msec) : 2=0.05%, 4=0.03% 00:18:17.958 cpu : usr=5.30%, sys=8.60%, ctx=3947, majf=0, minf=1 00:18:17.958 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.958 issued rwts: total=1899,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.958 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.958 job1: (groupid=0, jobs=1): err= 0: pid=3825851: Tue Jul 23 10:39:06 2024 00:18:17.958 read: IOPS=848, BW=3393KiB/s (3474kB/s)(3396KiB/1001msec) 00:18:17.958 slat (nsec): min=6434, max=38255, avg=12400.59, stdev=6192.14 00:18:17.958 clat (usec): min=198, max=40991, avg=864.41, stdev=4823.40 00:18:17.958 lat (usec): min=207, max=41004, avg=876.81, stdev=4825.08 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:18:17.959 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 260], 60.00th=[ 277], 00:18:17.959 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 400], 00:18:17.959 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:18:17.959 | 99.99th=[41157] 00:18:17.959 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:17.959 slat (nsec): min=8863, max=48968, avg=20135.30, stdev=5117.59 00:18:17.959 clat (usec): min=156, max=1504, avg=220.89, stdev=67.01 00:18:17.959 lat (usec): min=165, max=1531, avg=241.03, stdev=68.25 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 188], 00:18:17.959 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 219], 00:18:17.959 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 265], 95.00th=[ 302], 00:18:17.959 | 99.00th=[ 355], 99.50th=[ 392], 99.90th=[ 1045], 99.95th=[ 1500], 00:18:17.959 | 99.99th=[ 1500] 00:18:17.959 bw ( KiB/s): min= 8192, 
max= 8192, per=51.45%, avg=8192.00, stdev= 0.00, samples=1 00:18:17.959 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:17.959 lat (usec) : 250=66.79%, 500=32.19%, 750=0.11%, 1000=0.05% 00:18:17.959 lat (msec) : 2=0.16%, 50=0.69% 00:18:17.959 cpu : usr=3.40%, sys=3.40%, ctx=1873, majf=0, minf=1 00:18:17.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 issued rwts: total=849,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.959 job2: (groupid=0, jobs=1): err= 0: pid=3825852: Tue Jul 23 10:39:06 2024 00:18:17.959 read: IOPS=99, BW=397KiB/s (406kB/s)(408KiB/1029msec) 00:18:17.959 slat (nsec): min=13954, max=35326, avg=19110.13, stdev=6189.70 00:18:17.959 clat (usec): min=263, max=41308, avg=8676.75, stdev=16528.10 00:18:17.959 lat (usec): min=278, max=41326, avg=8695.86, stdev=16531.85 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 289], 00:18:17.959 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:18:17.959 | 70.00th=[ 326], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:17.959 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:17.959 | 99.99th=[41157] 00:18:17.959 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:18:17.959 slat (nsec): min=8161, max=62432, avg=22111.23, stdev=5188.38 00:18:17.959 clat (usec): min=196, max=423, avg=248.95, stdev=35.90 00:18:17.959 lat (usec): min=219, max=446, avg=271.06, stdev=36.02 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 223], 00:18:17.959 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 249], 00:18:17.959 | 
70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 293], 95.00th=[ 322], 00:18:17.959 | 99.00th=[ 379], 99.50th=[ 412], 99.90th=[ 424], 99.95th=[ 424], 00:18:17.959 | 99.99th=[ 424] 00:18:17.959 bw ( KiB/s): min= 4096, max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:18:17.959 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:17.959 lat (usec) : 250=54.23%, 500=42.35% 00:18:17.959 lat (msec) : 50=3.42% 00:18:17.959 cpu : usr=0.97%, sys=1.46%, ctx=616, majf=0, minf=1 00:18:17.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 issued rwts: total=102,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.959 job3: (groupid=0, jobs=1): err= 0: pid=3825854: Tue Jul 23 10:39:06 2024 00:18:17.959 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:18:17.959 slat (nsec): min=14477, max=35757, avg=25108.68, stdev=8479.60 00:18:17.959 clat (usec): min=340, max=41078, avg=39116.54, stdev=8660.97 00:18:17.959 lat (usec): min=359, max=41101, avg=39141.65, stdev=8662.36 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[ 343], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:17.959 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:17.959 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:17.959 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:17.959 | 99.99th=[41157] 00:18:17.959 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:18:17.959 slat (nsec): min=10353, max=61406, avg=23844.99, stdev=5993.55 00:18:17.959 clat (usec): min=202, max=764, avg=249.28, stdev=45.26 00:18:17.959 lat (usec): min=218, max=796, avg=273.12, stdev=47.45 00:18:17.959 clat 
percentiles (usec): 00:18:17.959 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:18:17.959 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:18:17.959 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 302], 95.00th=[ 338], 00:18:17.959 | 99.00th=[ 416], 99.50th=[ 449], 99.90th=[ 766], 99.95th=[ 766], 00:18:17.959 | 99.99th=[ 766] 00:18:17.959 bw ( KiB/s): min= 4096, max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:18:17.959 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:17.959 lat (usec) : 250=69.29%, 500=26.59%, 1000=0.19% 00:18:17.959 lat (msec) : 50=3.93% 00:18:17.959 cpu : usr=1.20%, sys=1.20%, ctx=535, majf=0, minf=1 00:18:17.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.959 00:18:17.959 Run status group 0 (all jobs): 00:18:17.959 READ: bw=10.9MiB/s (11.4MB/s), 87.6KiB/s-7588KiB/s (89.8kB/s-7771kB/s), io=11.2MiB (11.8MB), run=1001-1029msec 00:18:17.959 WRITE: bw=15.5MiB/s (16.3MB/s), 1990KiB/s-8184KiB/s (2038kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1029msec 00:18:17.959 00:18:17.959 Disk stats (read/write): 00:18:17.959 nvme0n1: ios=1586/1765, merge=0/0, ticks=393/373, in_queue=766, util=86.87% 00:18:17.959 nvme0n2: ios=604/1024, merge=0/0, ticks=655/217, in_queue=872, util=91.15% 00:18:17.959 nvme0n3: ios=154/512, merge=0/0, ticks=965/107, in_queue=1072, util=93.53% 00:18:17.959 nvme0n4: ios=75/512, merge=0/0, ticks=911/127, in_queue=1038, util=94.21% 00:18:17.959 10:39:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 
00:18:17.959 [global] 00:18:17.959 thread=1 00:18:17.959 invalidate=1 00:18:17.959 rw=randwrite 00:18:17.959 time_based=1 00:18:17.959 runtime=1 00:18:17.959 ioengine=libaio 00:18:17.959 direct=1 00:18:17.959 bs=4096 00:18:17.959 iodepth=1 00:18:17.959 norandommap=0 00:18:17.959 numjobs=1 00:18:17.959 00:18:17.959 verify_dump=1 00:18:17.959 verify_backlog=512 00:18:17.959 verify_state_save=0 00:18:17.959 do_verify=1 00:18:17.959 verify=crc32c-intel 00:18:17.959 [job0] 00:18:17.959 filename=/dev/nvme0n1 00:18:17.959 [job1] 00:18:17.959 filename=/dev/nvme0n2 00:18:17.959 [job2] 00:18:17.959 filename=/dev/nvme0n3 00:18:17.959 [job3] 00:18:17.959 filename=/dev/nvme0n4 00:18:17.959 Could not set queue depth (nvme0n1) 00:18:17.959 Could not set queue depth (nvme0n2) 00:18:17.959 Could not set queue depth (nvme0n3) 00:18:17.959 Could not set queue depth (nvme0n4) 00:18:17.959 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:17.959 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:17.959 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:17.959 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:17.959 fio-3.35 00:18:17.959 Starting 4 threads 00:18:19.336 00:18:19.336 job0: (groupid=0, jobs=1): err= 0: pid=3826048: Tue Jul 23 10:39:07 2024 00:18:19.336 read: IOPS=62, BW=250KiB/s (257kB/s)(252KiB/1006msec) 00:18:19.336 slat (nsec): min=7654, max=38420, avg=26302.60, stdev=8045.42 00:18:19.337 clat (usec): min=268, max=41418, avg=13754.73, stdev=19075.62 00:18:19.337 lat (usec): min=299, max=41449, avg=13781.03, stdev=19073.97 00:18:19.337 clat percentiles (usec): 00:18:19.337 | 1.00th=[ 269], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 379], 00:18:19.337 | 30.00th=[ 388], 40.00th=[ 404], 50.00th=[ 424], 60.00th=[ 465], 
00:18:19.337 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:18:19.337 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:19.337 | 99.99th=[41681] 00:18:19.337 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:18:19.337 slat (nsec): min=8201, max=24337, avg=9739.03, stdev=1905.84 00:18:19.337 clat (usec): min=158, max=823, avg=254.86, stdev=56.24 00:18:19.337 lat (usec): min=167, max=835, avg=264.60, stdev=56.21 00:18:19.337 clat percentiles (usec): 00:18:19.337 | 1.00th=[ 165], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 208], 00:18:19.337 | 30.00th=[ 229], 40.00th=[ 243], 50.00th=[ 260], 60.00th=[ 281], 00:18:19.337 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:18:19.337 | 99.00th=[ 392], 99.50th=[ 603], 99.90th=[ 824], 99.95th=[ 824], 00:18:19.337 | 99.99th=[ 824] 00:18:19.337 bw ( KiB/s): min= 4096, max= 4096, per=18.94%, avg=4096.00, stdev= 0.00, samples=1 00:18:19.337 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:19.337 lat (usec) : 250=42.78%, 500=52.70%, 750=0.70%, 1000=0.17% 00:18:19.337 lat (msec) : 50=3.65% 00:18:19.337 cpu : usr=0.30%, sys=1.00%, ctx=576, majf=0, minf=1 00:18:19.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.337 issued rwts: total=63,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:19.337 job1: (groupid=0, jobs=1): err= 0: pid=3826055: Tue Jul 23 10:39:07 2024 00:18:19.337 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:19.337 slat (nsec): min=6068, max=58134, avg=13543.48, stdev=4676.02 00:18:19.337 clat (usec): min=225, max=672, avg=346.21, stdev=44.31 00:18:19.337 lat (usec): min=233, max=687, avg=359.75, stdev=44.48 
00:18:19.337 clat percentiles (usec): 00:18:19.337 | 1.00th=[ 273], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 314], 00:18:19.337 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:18:19.337 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 420], 00:18:19.337 | 99.00th=[ 529], 99.50th=[ 570], 99.90th=[ 644], 99.95th=[ 676], 00:18:19.337 | 99.99th=[ 676] 00:18:19.337 write: IOPS=1677, BW=6709KiB/s (6870kB/s)(6716KiB/1001msec); 0 zone resets 00:18:19.337 slat (nsec): min=7748, max=52014, avg=14297.46, stdev=5170.45 00:18:19.337 clat (usec): min=179, max=455, avg=243.96, stdev=34.21 00:18:19.337 lat (usec): min=192, max=464, avg=258.26, stdev=31.01 00:18:19.337 clat percentiles (usec): 00:18:19.337 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:18:19.337 | 30.00th=[ 219], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 253], 00:18:19.337 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:18:19.337 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 429], 99.95th=[ 457], 00:18:19.337 | 99.99th=[ 457] 00:18:19.337 bw ( KiB/s): min= 8192, max= 8192, per=37.87%, avg=8192.00, stdev= 0.00, samples=1 00:18:19.337 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:19.337 lat (usec) : 250=30.02%, 500=69.11%, 750=0.87% 00:18:19.337 cpu : usr=4.20%, sys=5.90%, ctx=3215, majf=0, minf=1 00:18:19.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.337 issued rwts: total=1536,1679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:19.337 job2: (groupid=0, jobs=1): err= 0: pid=3826088: Tue Jul 23 10:39:07 2024 00:18:19.337 read: IOPS=1136, BW=4547KiB/s (4657kB/s)(4552KiB/1001msec) 00:18:19.337 slat (nsec): min=6300, max=42351, avg=12912.19, 
stdev=5147.26 00:18:19.337 clat (usec): min=241, max=41983, avg=561.91, stdev=3238.28 00:18:19.337 lat (usec): min=248, max=42001, avg=574.83, stdev=3239.07 00:18:19.337 clat percentiles (usec): 00:18:19.337 | 1.00th=[ 249], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:18:19.337 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 00:18:19.337 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 416], 00:18:19.337 | 99.00th=[ 873], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:18:19.337 | 99.99th=[42206] 00:18:19.337 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:19.337 slat (nsec): min=8174, max=49059, avg=14193.77, stdev=5927.02 00:18:19.337 clat (usec): min=164, max=408, avg=204.27, stdev=25.78 00:18:19.337 lat (usec): min=173, max=417, avg=218.47, stdev=29.36 00:18:19.337 clat percentiles (usec): 00:18:19.337 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:18:19.337 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:18:19.337 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 245], 00:18:19.337 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 351], 99.95th=[ 408], 00:18:19.337 | 99.99th=[ 408] 00:18:19.337 bw ( KiB/s): min= 4096, max= 4096, per=18.94%, avg=4096.00, stdev= 0.00, samples=1 00:18:19.337 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:19.337 lat (usec) : 250=55.83%, 500=42.52%, 750=1.12%, 1000=0.26% 00:18:19.337 lat (msec) : 50=0.26% 00:18:19.337 cpu : usr=2.10%, sys=5.70%, ctx=2676, majf=0, minf=1 00:18:19.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.337 issued rwts: total=1138,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:19.337 job3: 
(groupid=0, jobs=1): err= 0: pid=3826095: Tue Jul 23 10:39:07 2024 00:18:19.337 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:19.337 slat (nsec): min=6025, max=67867, avg=13694.88, stdev=4558.23 00:18:19.337 clat (usec): min=246, max=616, avg=340.33, stdev=34.25 00:18:19.337 lat (usec): min=255, max=630, avg=354.02, stdev=34.79 00:18:19.337 clat percentiles (usec): 00:18:19.337 | 1.00th=[ 277], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 314], 00:18:19.337 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:18:19.337 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 388], 00:18:19.337 | 99.00th=[ 474], 99.50th=[ 545], 99.90th=[ 611], 99.95th=[ 619], 00:18:19.337 | 99.99th=[ 619] 00:18:19.337 write: IOPS=1711, BW=6845KiB/s (7009kB/s)(6852KiB/1001msec); 0 zone resets 00:18:19.337 slat (nsec): min=7817, max=40434, avg=14303.24, stdev=5567.69 00:18:19.337 clat (usec): min=176, max=524, avg=244.22, stdev=35.65 00:18:19.337 lat (usec): min=185, max=535, avg=258.53, stdev=32.46 00:18:19.337 clat percentiles (usec): 00:18:19.337 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:18:19.337 | 30.00th=[ 219], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 253], 00:18:19.337 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:18:19.337 | 99.00th=[ 330], 99.50th=[ 371], 99.90th=[ 445], 99.95th=[ 529], 00:18:19.337 | 99.99th=[ 529] 00:18:19.337 bw ( KiB/s): min= 8192, max= 8192, per=37.87%, avg=8192.00, stdev= 0.00, samples=1 00:18:19.337 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:19.337 lat (usec) : 250=30.72%, 500=68.98%, 750=0.31% 00:18:19.337 cpu : usr=3.10%, sys=7.00%, ctx=3249, majf=0, minf=1 00:18:19.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.337 issued rwts: 
total=1536,1713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:19.337 00:18:19.337 Run status group 0 (all jobs): 00:18:19.337 READ: bw=16.6MiB/s (17.4MB/s), 250KiB/s-6138KiB/s (257kB/s-6285kB/s), io=16.7MiB (17.5MB), run=1001-1006msec 00:18:19.337 WRITE: bw=21.1MiB/s (22.1MB/s), 2036KiB/s-6845KiB/s (2085kB/s-7009kB/s), io=21.2MiB (22.3MB), run=1001-1006msec 00:18:19.337 00:18:19.337 Disk stats (read/write): 00:18:19.337 nvme0n1: ios=108/512, merge=0/0, ticks=1002/129, in_queue=1131, util=98.00% 00:18:19.337 nvme0n2: ios=1235/1536, merge=0/0, ticks=419/368, in_queue=787, util=86.60% 00:18:19.337 nvme0n3: ios=1053/1089, merge=0/0, ticks=856/210, in_queue=1066, util=98.33% 00:18:19.337 nvme0n4: ios=1253/1536, merge=0/0, ticks=414/354, in_queue=768, util=89.60% 00:18:19.337 10:39:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:19.337 [global] 00:18:19.337 thread=1 00:18:19.337 invalidate=1 00:18:19.337 rw=write 00:18:19.337 time_based=1 00:18:19.337 runtime=1 00:18:19.337 ioengine=libaio 00:18:19.337 direct=1 00:18:19.337 bs=4096 00:18:19.337 iodepth=128 00:18:19.337 norandommap=0 00:18:19.337 numjobs=1 00:18:19.337 00:18:19.337 verify_dump=1 00:18:19.337 verify_backlog=512 00:18:19.337 verify_state_save=0 00:18:19.337 do_verify=1 00:18:19.337 verify=crc32c-intel 00:18:19.337 [job0] 00:18:19.337 filename=/dev/nvme0n1 00:18:19.337 [job1] 00:18:19.337 filename=/dev/nvme0n2 00:18:19.337 [job2] 00:18:19.337 filename=/dev/nvme0n3 00:18:19.337 [job3] 00:18:19.337 filename=/dev/nvme0n4 00:18:19.337 Could not set queue depth (nvme0n1) 00:18:19.337 Could not set queue depth (nvme0n2) 00:18:19.337 Could not set queue depth (nvme0n3) 00:18:19.337 Could not set queue depth (nvme0n4) 00:18:19.337 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:18:19.337 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:19.337 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:19.337 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:19.337 fio-3.35 00:18:19.337 Starting 4 threads 00:18:20.713 00:18:20.714 job0: (groupid=0, jobs=1): err= 0: pid=3826303: Tue Jul 23 10:39:08 2024 00:18:20.714 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:18:20.714 slat (usec): min=2, max=29282, avg=113.53, stdev=762.53 00:18:20.714 clat (usec): min=8537, max=41752, avg=14987.22, stdev=6006.81 00:18:20.714 lat (usec): min=8550, max=41760, avg=15100.75, stdev=6048.40 00:18:20.714 clat percentiles (usec): 00:18:20.714 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[10683], 20.00th=[11469], 00:18:20.714 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12518], 60.00th=[12911], 00:18:20.714 | 70.00th=[14222], 80.00th=[18744], 90.00th=[21365], 95.00th=[28705], 00:18:20.714 | 99.00th=[38536], 99.50th=[38536], 99.90th=[41681], 99.95th=[41681], 00:18:20.714 | 99.99th=[41681] 00:18:20.714 write: IOPS=4315, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1003msec); 0 zone resets 00:18:20.714 slat (usec): min=4, max=8796, avg=115.45, stdev=614.85 00:18:20.714 clat (usec): min=2168, max=41765, avg=15155.99, stdev=6386.51 00:18:20.714 lat (usec): min=2841, max=41793, avg=15271.43, stdev=6424.27 00:18:20.714 clat percentiles (usec): 00:18:20.714 | 1.00th=[ 5997], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[11469], 00:18:20.714 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[13566], 00:18:20.714 | 70.00th=[16319], 80.00th=[19006], 90.00th=[23462], 95.00th=[30016], 00:18:20.714 | 99.00th=[38536], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 00:18:20.714 | 99.99th=[41681] 00:18:20.714 bw ( KiB/s): min=13632, max=19976, per=28.68%, avg=16804.00, 
stdev=4485.89, samples=2 00:18:20.714 iops : min= 3408, max= 4994, avg=4201.00, stdev=1121.47, samples=2 00:18:20.714 lat (msec) : 4=0.20%, 10=7.12%, 20=78.41%, 50=14.27% 00:18:20.714 cpu : usr=4.49%, sys=6.79%, ctx=412, majf=0, minf=9 00:18:20.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:20.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.714 issued rwts: total=4096,4328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.714 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.714 job1: (groupid=0, jobs=1): err= 0: pid=3826304: Tue Jul 23 10:39:08 2024 00:18:20.714 read: IOPS=3267, BW=12.8MiB/s (13.4MB/s)(13.3MiB/1042msec) 00:18:20.714 slat (usec): min=3, max=13985, avg=134.14, stdev=823.79 00:18:20.714 clat (usec): min=8896, max=57524, avg=18735.40, stdev=8334.42 00:18:20.714 lat (usec): min=8903, max=57530, avg=18869.54, stdev=8357.72 00:18:20.714 clat percentiles (usec): 00:18:20.714 | 1.00th=[ 9110], 5.00th=[11731], 10.00th=[11863], 20.00th=[13566], 00:18:20.714 | 30.00th=[14877], 40.00th=[15533], 50.00th=[16319], 60.00th=[17695], 00:18:20.714 | 70.00th=[18744], 80.00th=[21103], 90.00th=[29230], 95.00th=[39060], 00:18:20.714 | 99.00th=[52691], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:18:20.714 | 99.99th=[57410] 00:18:20.714 write: IOPS=3439, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1042msec); 0 zone resets 00:18:20.714 slat (usec): min=5, max=21907, avg=142.43, stdev=875.33 00:18:20.714 clat (usec): min=8484, max=48242, avg=18776.41, stdev=7932.71 00:18:20.714 lat (usec): min=8493, max=48252, avg=18918.85, stdev=7999.11 00:18:20.714 clat percentiles (usec): 00:18:20.714 | 1.00th=[ 8848], 5.00th=[10945], 10.00th=[11338], 20.00th=[12256], 00:18:20.714 | 30.00th=[13173], 40.00th=[14222], 50.00th=[14615], 60.00th=[18744], 00:18:20.714 | 70.00th=[23462], 80.00th=[24773], 90.00th=[32900], 
95.00th=[33817], 00:18:20.714 | 99.00th=[42206], 99.50th=[47973], 99.90th=[47973], 99.95th=[48497], 00:18:20.714 | 99.99th=[48497] 00:18:20.714 bw ( KiB/s): min=13064, max=15608, per=24.47%, avg=14336.00, stdev=1798.88, samples=2 00:18:20.714 iops : min= 3266, max= 3902, avg=3584.00, stdev=449.72, samples=2 00:18:20.714 lat (msec) : 10=2.43%, 20=67.42%, 50=29.26%, 100=0.89% 00:18:20.714 cpu : usr=4.13%, sys=6.34%, ctx=311, majf=0, minf=19 00:18:20.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:20.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.714 issued rwts: total=3405,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.714 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.714 job2: (groupid=0, jobs=1): err= 0: pid=3826305: Tue Jul 23 10:39:08 2024 00:18:20.714 read: IOPS=3704, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1002msec) 00:18:20.714 slat (usec): min=3, max=10860, avg=127.41, stdev=678.14 00:18:20.714 clat (usec): min=524, max=31812, avg=15421.50, stdev=4095.64 00:18:20.714 lat (usec): min=2547, max=31819, avg=15548.91, stdev=4115.44 00:18:20.714 clat percentiles (usec): 00:18:20.714 | 1.00th=[ 5604], 5.00th=[10421], 10.00th=[11469], 20.00th=[13304], 00:18:20.714 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14877], 60.00th=[15139], 00:18:20.714 | 70.00th=[15926], 80.00th=[17433], 90.00th=[19268], 95.00th=[23987], 00:18:20.714 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:18:20.714 | 99.99th=[31851] 00:18:20.714 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:18:20.714 slat (usec): min=3, max=22229, avg=119.40, stdev=729.76 00:18:20.714 clat (usec): min=6719, max=56275, avg=16777.94, stdev=6237.71 00:18:20.714 lat (usec): min=6727, max=56292, avg=16897.34, stdev=6282.51 00:18:20.714 clat percentiles (usec): 00:18:20.714 | 1.00th=[ 7832], 
5.00th=[11731], 10.00th=[12911], 20.00th=[13698], 00:18:20.714 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14615], 60.00th=[15008], 00:18:20.714 | 70.00th=[15795], 80.00th=[18220], 90.00th=[25035], 95.00th=[30016], 00:18:20.714 | 99.00th=[41681], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:18:20.714 | 99.99th=[56361] 00:18:20.714 bw ( KiB/s): min=16384, max=16384, per=27.96%, avg=16384.00, stdev= 0.00, samples=2 00:18:20.714 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:18:20.714 lat (usec) : 750=0.01% 00:18:20.714 lat (msec) : 4=0.08%, 10=2.37%, 20=85.13%, 50=12.40%, 100=0.01% 00:18:20.714 cpu : usr=4.50%, sys=8.49%, ctx=490, majf=0, minf=9 00:18:20.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:20.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.714 issued rwts: total=3712,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.714 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.714 job3: (groupid=0, jobs=1): err= 0: pid=3826306: Tue Jul 23 10:39:08 2024 00:18:20.714 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:18:20.714 slat (usec): min=3, max=25905, avg=189.11, stdev=1234.42 00:18:20.714 clat (usec): min=6644, max=64501, avg=23813.88, stdev=12975.01 00:18:20.714 lat (usec): min=6650, max=64530, avg=24002.99, stdev=13034.96 00:18:20.714 clat percentiles (usec): 00:18:20.714 | 1.00th=[ 6718], 5.00th=[12911], 10.00th=[13566], 20.00th=[14353], 00:18:20.714 | 30.00th=[15795], 40.00th=[16450], 50.00th=[18744], 60.00th=[22152], 00:18:20.714 | 70.00th=[26608], 80.00th=[28967], 90.00th=[46924], 95.00th=[57410], 00:18:20.714 | 99.00th=[64226], 99.50th=[64226], 99.90th=[64226], 99.95th=[64750], 00:18:20.714 | 99.99th=[64750] 00:18:20.714 write: IOPS=3248, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1002msec); 0 zone resets 00:18:20.714 slat (usec): min=4, max=14757, 
avg=116.48, stdev=690.28 00:18:20.714 clat (usec): min=575, max=48091, avg=16557.12, stdev=6643.07 00:18:20.714 lat (usec): min=4080, max=48122, avg=16673.60, stdev=6676.15 00:18:20.714 clat percentiles (usec): 00:18:20.714 | 1.00th=[ 4424], 5.00th=[ 9110], 10.00th=[11469], 20.00th=[12125], 00:18:20.714 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14484], 60.00th=[15533], 00:18:20.714 | 70.00th=[16909], 80.00th=[20841], 90.00th=[24773], 95.00th=[26084], 00:18:20.714 | 99.00th=[44303], 99.50th=[45351], 99.90th=[47973], 99.95th=[47973], 00:18:20.714 | 99.99th=[47973] 00:18:20.714 bw ( KiB/s): min=12288, max=12736, per=21.35%, avg=12512.00, stdev=316.78, samples=2 00:18:20.714 iops : min= 3072, max= 3184, avg=3128.00, stdev=79.20, samples=2 00:18:20.714 lat (usec) : 750=0.02% 00:18:20.714 lat (msec) : 10=3.45%, 20=63.49%, 50=29.49%, 100=3.56% 00:18:20.714 cpu : usr=4.00%, sys=5.89%, ctx=324, majf=0, minf=15 00:18:20.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:20.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.714 issued rwts: total=3072,3255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.714 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.714 00:18:20.714 Run status group 0 (all jobs): 00:18:20.714 READ: bw=53.6MiB/s (56.2MB/s), 12.0MiB/s-16.0MiB/s (12.6MB/s-16.7MB/s), io=55.8MiB (58.5MB), run=1002-1042msec 00:18:20.714 WRITE: bw=57.2MiB/s (60.0MB/s), 12.7MiB/s-16.9MiB/s (13.3MB/s-17.7MB/s), io=59.6MiB (62.5MB), run=1002-1042msec 00:18:20.714 00:18:20.714 Disk stats (read/write): 00:18:20.714 nvme0n1: ios=3634/3711, merge=0/0, ticks=17977/16823, in_queue=34800, util=86.07% 00:18:20.714 nvme0n2: ios=2668/3072, merge=0/0, ticks=21036/24379, in_queue=45415, util=98.17% 00:18:20.714 nvme0n3: ios=3072/3514, merge=0/0, ticks=17989/21757, in_queue=39746, util=88.44% 00:18:20.714 nvme0n4: 
ios=2609/2668, merge=0/0, ticks=22080/20091, in_queue=42171, util=97.06% 00:18:20.714 10:39:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:20.714 [global] 00:18:20.714 thread=1 00:18:20.714 invalidate=1 00:18:20.714 rw=randwrite 00:18:20.714 time_based=1 00:18:20.714 runtime=1 00:18:20.714 ioengine=libaio 00:18:20.714 direct=1 00:18:20.714 bs=4096 00:18:20.714 iodepth=128 00:18:20.714 norandommap=0 00:18:20.714 numjobs=1 00:18:20.714 00:18:20.714 verify_dump=1 00:18:20.714 verify_backlog=512 00:18:20.714 verify_state_save=0 00:18:20.714 do_verify=1 00:18:20.714 verify=crc32c-intel 00:18:20.714 [job0] 00:18:20.714 filename=/dev/nvme0n1 00:18:20.714 [job1] 00:18:20.714 filename=/dev/nvme0n2 00:18:20.714 [job2] 00:18:20.714 filename=/dev/nvme0n3 00:18:20.714 [job3] 00:18:20.714 filename=/dev/nvme0n4 00:18:20.714 Could not set queue depth (nvme0n1) 00:18:20.714 Could not set queue depth (nvme0n2) 00:18:20.714 Could not set queue depth (nvme0n3) 00:18:20.714 Could not set queue depth (nvme0n4) 00:18:20.714 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:20.714 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:20.715 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:20.715 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:20.715 fio-3.35 00:18:20.715 Starting 4 threads 00:18:22.091 00:18:22.091 job0: (groupid=0, jobs=1): err= 0: pid=3826882: Tue Jul 23 10:39:10 2024 00:18:22.091 read: IOPS=3687, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1004msec) 00:18:22.091 slat (usec): min=3, max=10152, avg=127.02, stdev=708.04 00:18:22.091 clat (usec): min=1421, max=38941, avg=15726.10, stdev=5472.43 
00:18:22.091 lat (usec): min=5374, max=38950, avg=15853.12, stdev=5509.10 00:18:22.091 clat percentiles (usec): 00:18:22.091 | 1.00th=[ 5669], 5.00th=[11469], 10.00th=[11731], 20.00th=[11994], 00:18:22.091 | 30.00th=[12125], 40.00th=[12387], 50.00th=[13566], 60.00th=[14615], 00:18:22.091 | 70.00th=[16909], 80.00th=[19792], 90.00th=[23987], 95.00th=[28443], 00:18:22.091 | 99.00th=[34341], 99.50th=[34341], 99.90th=[38011], 99.95th=[38011], 00:18:22.091 | 99.99th=[39060] 00:18:22.091 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:18:22.091 slat (usec): min=3, max=13066, avg=123.01, stdev=626.07 00:18:22.091 clat (usec): min=6961, max=36846, avg=16753.61, stdev=6000.25 00:18:22.091 lat (usec): min=6970, max=36854, avg=16876.61, stdev=6032.79 00:18:22.091 clat percentiles (usec): 00:18:22.091 | 1.00th=[ 7308], 5.00th=[ 9110], 10.00th=[10814], 20.00th=[11600], 00:18:22.091 | 30.00th=[12256], 40.00th=[13304], 50.00th=[15270], 60.00th=[17433], 00:18:22.091 | 70.00th=[20317], 80.00th=[21890], 90.00th=[25035], 95.00th=[28181], 00:18:22.091 | 99.00th=[35390], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:18:22.091 | 99.99th=[36963] 00:18:22.092 bw ( KiB/s): min=16304, max=16384, per=26.73%, avg=16344.00, stdev=56.57, samples=2 00:18:22.092 iops : min= 4076, max= 4096, avg=4086.00, stdev=14.14, samples=2 00:18:22.092 lat (msec) : 2=0.01%, 10=4.87%, 20=69.08%, 50=26.03% 00:18:22.092 cpu : usr=3.59%, sys=5.28%, ctx=371, majf=0, minf=1 00:18:22.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:22.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.092 issued rwts: total=3702,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.092 job1: (groupid=0, jobs=1): err= 0: pid=3826883: Tue Jul 23 10:39:10 2024 00:18:22.092 read: 
IOPS=3184, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1005msec) 00:18:22.092 slat (usec): min=3, max=12397, avg=145.41, stdev=742.08 00:18:22.092 clat (usec): min=1050, max=50370, avg=18524.09, stdev=7688.62 00:18:22.092 lat (usec): min=9503, max=53402, avg=18669.51, stdev=7713.81 00:18:22.092 clat percentiles (usec): 00:18:22.092 | 1.00th=[10159], 5.00th=[11207], 10.00th=[11994], 20.00th=[12649], 00:18:22.092 | 30.00th=[13304], 40.00th=[14222], 50.00th=[15664], 60.00th=[17695], 00:18:22.092 | 70.00th=[20055], 80.00th=[23725], 90.00th=[29754], 95.00th=[36439], 00:18:22.092 | 99.00th=[43779], 99.50th=[48497], 99.90th=[50070], 99.95th=[50594], 00:18:22.092 | 99.99th=[50594] 00:18:22.092 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:18:22.092 slat (usec): min=4, max=7740, avg=140.51, stdev=591.74 00:18:22.092 clat (usec): min=8665, max=43598, avg=18711.23, stdev=7524.74 00:18:22.092 lat (usec): min=8762, max=43659, avg=18851.74, stdev=7558.40 00:18:22.092 clat percentiles (usec): 00:18:22.092 | 1.00th=[ 9241], 5.00th=[11207], 10.00th=[11469], 20.00th=[12256], 00:18:22.092 | 30.00th=[13566], 40.00th=[14353], 50.00th=[16712], 60.00th=[19530], 00:18:22.092 | 70.00th=[21627], 80.00th=[23200], 90.00th=[28181], 95.00th=[36439], 00:18:22.092 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:18:22.092 | 99.99th=[43779] 00:18:22.092 bw ( KiB/s): min=12288, max=16384, per=23.45%, avg=14336.00, stdev=2896.31, samples=2 00:18:22.092 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:18:22.092 lat (msec) : 2=0.01%, 10=1.62%, 20=64.06%, 50=34.20%, 100=0.10% 00:18:22.092 cpu : usr=3.88%, sys=6.27%, ctx=430, majf=0, minf=1 00:18:22.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:22.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.092 issued rwts: total=3200,3584,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:18:22.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.092 job2: (groupid=0, jobs=1): err= 0: pid=3826898: Tue Jul 23 10:39:10 2024 00:18:22.092 read: IOPS=3524, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1003msec) 00:18:22.092 slat (usec): min=2, max=14926, avg=148.95, stdev=850.51 00:18:22.092 clat (usec): min=2520, max=35652, avg=18515.91, stdev=5675.94 00:18:22.092 lat (usec): min=2529, max=40660, avg=18664.86, stdev=5703.40 00:18:22.092 clat percentiles (usec): 00:18:22.092 | 1.00th=[ 7242], 5.00th=[10945], 10.00th=[12518], 20.00th=[13698], 00:18:22.092 | 30.00th=[14222], 40.00th=[15926], 50.00th=[17695], 60.00th=[20055], 00:18:22.092 | 70.00th=[22152], 80.00th=[23462], 90.00th=[26084], 95.00th=[28443], 00:18:22.092 | 99.00th=[33162], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:18:22.092 | 99.99th=[35914] 00:18:22.092 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:18:22.092 slat (usec): min=3, max=17714, avg=125.65, stdev=741.91 00:18:22.092 clat (usec): min=7247, max=43139, avg=17116.39, stdev=4790.24 00:18:22.092 lat (usec): min=7261, max=43153, avg=17242.04, stdev=4829.54 00:18:22.092 clat percentiles (usec): 00:18:22.092 | 1.00th=[ 9372], 5.00th=[11207], 10.00th=[13042], 20.00th=[13566], 00:18:22.092 | 30.00th=[14353], 40.00th=[15008], 50.00th=[16057], 60.00th=[17695], 00:18:22.092 | 70.00th=[18220], 80.00th=[19006], 90.00th=[23462], 95.00th=[27395], 00:18:22.092 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[40109], 00:18:22.092 | 99.99th=[43254] 00:18:22.092 bw ( KiB/s): min=14184, max=14488, per=23.45%, avg=14336.00, stdev=214.96, samples=2 00:18:22.092 iops : min= 3546, max= 3622, avg=3584.00, stdev=53.74, samples=2 00:18:22.092 lat (msec) : 4=0.35%, 10=1.74%, 20=69.25%, 50=28.66% 00:18:22.092 cpu : usr=2.30%, sys=4.89%, ctx=340, majf=0, minf=1 00:18:22.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:22.092 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.092 issued rwts: total=3535,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.092 job3: (groupid=0, jobs=1): err= 0: pid=3826899: Tue Jul 23 10:39:10 2024 00:18:22.092 read: IOPS=3993, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1002msec) 00:18:22.092 slat (usec): min=2, max=11261, avg=126.29, stdev=745.93 00:18:22.092 clat (usec): min=598, max=38376, avg=16171.97, stdev=5010.14 00:18:22.092 lat (usec): min=1555, max=38382, avg=16298.27, stdev=5024.59 00:18:22.092 clat percentiles (usec): 00:18:22.092 | 1.00th=[ 4178], 5.00th=[ 8979], 10.00th=[11731], 20.00th=[13304], 00:18:22.092 | 30.00th=[13698], 40.00th=[14353], 50.00th=[14877], 60.00th=[15795], 00:18:22.092 | 70.00th=[17433], 80.00th=[21103], 90.00th=[23725], 95.00th=[25297], 00:18:22.092 | 99.00th=[27919], 99.50th=[30540], 99.90th=[38536], 99.95th=[38536], 00:18:22.092 | 99.99th=[38536] 00:18:22.092 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:18:22.092 slat (usec): min=3, max=12009, avg=110.43, stdev=631.17 00:18:22.092 clat (usec): min=1538, max=40490, avg=15031.40, stdev=4883.70 00:18:22.092 lat (usec): min=1555, max=40498, avg=15141.83, stdev=4898.01 00:18:22.092 clat percentiles (usec): 00:18:22.092 | 1.00th=[ 6194], 5.00th=[ 9634], 10.00th=[11338], 20.00th=[12387], 00:18:22.092 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13829], 60.00th=[14353], 00:18:22.092 | 70.00th=[14877], 80.00th=[16909], 90.00th=[20579], 95.00th=[25035], 00:18:22.092 | 99.00th=[34866], 99.50th=[39060], 99.90th=[40633], 99.95th=[40633], 00:18:22.092 | 99.99th=[40633] 00:18:22.092 bw ( KiB/s): min=16232, max=16536, per=26.80%, avg=16384.00, stdev=214.96, samples=2 00:18:22.092 iops : min= 4058, max= 4134, avg=4096.00, stdev=53.74, samples=2 00:18:22.092 lat (usec) : 750=0.01% 
00:18:22.092 lat (msec) : 2=0.10%, 4=0.33%, 10=6.19%, 20=76.36%, 50=17.01% 00:18:22.092 cpu : usr=3.30%, sys=5.59%, ctx=334, majf=0, minf=1 00:18:22.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:22.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.092 issued rwts: total=4001,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.092 00:18:22.092 Run status group 0 (all jobs): 00:18:22.092 READ: bw=56.1MiB/s (58.8MB/s), 12.4MiB/s-15.6MiB/s (13.0MB/s-16.4MB/s), io=56.4MiB (59.1MB), run=1002-1005msec 00:18:22.092 WRITE: bw=59.7MiB/s (62.6MB/s), 13.9MiB/s-16.0MiB/s (14.6MB/s-16.7MB/s), io=60.0MiB (62.9MB), run=1002-1005msec 00:18:22.092 00:18:22.092 Disk stats (read/write): 00:18:22.092 nvme0n1: ios=3113/3584, merge=0/0, ticks=16026/18842, in_queue=34868, util=91.08% 00:18:22.092 nvme0n2: ios=3058/3072, merge=0/0, ticks=14005/12742, in_queue=26747, util=94.61% 00:18:22.092 nvme0n3: ios=2743/3072, merge=0/0, ticks=17043/16427, in_queue=33470, util=90.52% 00:18:22.092 nvme0n4: ios=3204/3584, merge=0/0, ticks=18920/17521, in_queue=36441, util=91.39% 00:18:22.092 10:39:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:22.092 10:39:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3827108 00:18:22.092 10:39:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:22.092 10:39:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:22.092 [global] 00:18:22.092 thread=1 00:18:22.092 invalidate=1 00:18:22.092 rw=read 00:18:22.092 time_based=1 00:18:22.092 runtime=10 00:18:22.092 ioengine=libaio 00:18:22.092 direct=1 00:18:22.092 bs=4096 00:18:22.092 iodepth=1 00:18:22.092 norandommap=1 00:18:22.092 numjobs=1 
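As a minimal sketch (an assumption about what the wrapper generates, not its actual output file), the `[global]` options dumped above plus one `[jobN]` section per `/dev/nvme0nX` device would form a standard fio job file like this:

```shell
# Hedged sketch: reconstruct the fio job file implied by the [global] dump above.
# The wrapper's real job file may differ; device paths are taken from this log,
# and /tmp/nvmf_read_sketch.fio is an illustrative path.
cat > /tmp/nvmf_read_sketch.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
# It would then be launched roughly as: fio /tmp/nvmf_read_sketch.fio
grep -c '^filename=' /tmp/nvmf_read_sketch.fio
```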
00:18:22.092 00:18:22.092 [job0] 00:18:22.092 filename=/dev/nvme0n1 00:18:22.092 [job1] 00:18:22.092 filename=/dev/nvme0n2 00:18:22.092 [job2] 00:18:22.092 filename=/dev/nvme0n3 00:18:22.092 [job3] 00:18:22.092 filename=/dev/nvme0n4 00:18:22.092 Could not set queue depth (nvme0n1) 00:18:22.092 Could not set queue depth (nvme0n2) 00:18:22.092 Could not set queue depth (nvme0n3) 00:18:22.092 Could not set queue depth (nvme0n4) 00:18:22.350 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:22.350 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:22.350 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:22.350 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:22.350 fio-3.35 00:18:22.350 Starting 4 threads 00:18:25.643 10:39:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:25.643 10:39:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:25.643 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=11755520, buflen=4096 00:18:25.643 fio: pid=3827195, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:25.643 10:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:25.643 10:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:25.643 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=41517056, buflen=4096 00:18:25.643 fio: pid=3827190, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:25.901 10:39:14 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:25.901 10:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:25.901 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=8704000, buflen=4096 00:18:25.901 fio: pid=3827184, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:26.158 10:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:26.158 10:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:26.158 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=18284544, buflen=4096 00:18:26.158 fio: pid=3827187, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:26.417 00:18:26.417 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3827184: Tue Jul 23 10:39:14 2024 00:18:26.417 read: IOPS=599, BW=2395KiB/s (2453kB/s)(8500KiB/3549msec) 00:18:26.417 slat (usec): min=4, max=9894, avg=14.35, stdev=214.44 00:18:26.417 clat (usec): min=189, max=42368, avg=1642.71, stdev=7332.09 00:18:26.417 lat (usec): min=199, max=42389, avg=1652.41, stdev=7334.07 00:18:26.417 clat percentiles (usec): 00:18:26.417 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 237], 00:18:26.417 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:18:26.417 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 355], 95.00th=[ 537], 00:18:26.417 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:26.417 | 99.99th=[42206] 00:18:26.417 bw ( KiB/s): min= 96, max= 8080, per=13.76%, avg=2816.00, stdev=2986.81, samples=6 00:18:26.417 iops : min= 24, max= 2020, avg=704.00, stdev=746.70, samples=6 
00:18:26.417 lat (usec) : 250=53.62%, 500=40.64%, 750=2.21% 00:18:26.417 lat (msec) : 2=0.05%, 20=0.05%, 50=3.39% 00:18:26.417 cpu : usr=0.25%, sys=0.70%, ctx=2129, majf=0, minf=1 00:18:26.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.417 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.417 issued rwts: total=2126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.417 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3827187: Tue Jul 23 10:39:14 2024 00:18:26.417 read: IOPS=1165, BW=4661KiB/s (4773kB/s)(17.4MiB/3831msec) 00:18:26.417 slat (usec): min=4, max=16885, avg=26.82, stdev=491.09 00:18:26.417 clat (usec): min=179, max=43898, avg=823.29, stdev=4963.00 00:18:26.417 lat (usec): min=184, max=57791, avg=850.11, stdev=5046.20 00:18:26.417 clat percentiles (usec): 00:18:26.417 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:18:26.417 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 212], 00:18:26.417 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 281], 95.00th=[ 297], 00:18:26.417 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:26.417 | 99.99th=[43779] 00:18:26.417 bw ( KiB/s): min= 96, max=17168, per=24.86%, avg=5086.57, stdev=8173.15, samples=7 00:18:26.417 iops : min= 24, max= 4292, avg=1271.57, stdev=2043.34, samples=7 00:18:26.417 lat (usec) : 250=83.70%, 500=14.74%, 750=0.07% 00:18:26.417 lat (msec) : 2=0.02%, 50=1.46% 00:18:26.417 cpu : usr=0.50%, sys=1.33%, ctx=4472, majf=0, minf=1 00:18:26.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.417 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:18:26.417 issued rwts: total=4465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.417 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3827190: Tue Jul 23 10:39:14 2024 00:18:26.417 read: IOPS=3153, BW=12.3MiB/s (12.9MB/s)(39.6MiB/3215msec) 00:18:26.417 slat (nsec): min=5210, max=58657, avg=11291.81, stdev=5117.53 00:18:26.417 clat (usec): min=208, max=42074, avg=300.87, stdev=602.09 00:18:26.417 lat (usec): min=214, max=42083, avg=312.17, stdev=602.28 00:18:26.417 clat percentiles (usec): 00:18:26.417 | 1.00th=[ 225], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:18:26.417 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 289], 00:18:26.417 | 70.00th=[ 306], 80.00th=[ 334], 90.00th=[ 363], 95.00th=[ 379], 00:18:26.417 | 99.00th=[ 490], 99.50th=[ 519], 99.90th=[ 701], 99.95th=[ 1074], 00:18:26.417 | 99.99th=[42206] 00:18:26.417 bw ( KiB/s): min= 8800, max=14744, per=61.22%, avg=12525.33, stdev=2242.80, samples=6 00:18:26.417 iops : min= 2200, max= 3686, avg=3131.33, stdev=560.70, samples=6 00:18:26.417 lat (usec) : 250=24.35%, 500=74.82%, 750=0.75%, 1000=0.02% 00:18:26.417 lat (msec) : 2=0.03%, 20=0.01%, 50=0.02% 00:18:26.417 cpu : usr=2.05%, sys=5.91%, ctx=10139, majf=0, minf=1 00:18:26.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.417 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.417 issued rwts: total=10137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.417 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3827195: Tue Jul 23 10:39:14 2024 00:18:26.417 read: IOPS=968, BW=3873KiB/s (3966kB/s)(11.2MiB/2964msec) 00:18:26.417 slat (nsec): 
min=6518, max=43165, avg=13902.76, stdev=4538.81 00:18:26.417 clat (usec): min=216, max=45999, avg=1005.60, stdev=5230.09 00:18:26.417 lat (usec): min=224, max=46018, avg=1019.50, stdev=5230.86 00:18:26.417 clat percentiles (usec): 00:18:26.417 | 1.00th=[ 241], 5.00th=[ 253], 10.00th=[ 269], 20.00th=[ 281], 00:18:26.417 | 30.00th=[ 302], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 355], 00:18:26.417 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 383], 95.00th=[ 449], 00:18:26.417 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[44827], 00:18:26.417 | 99.99th=[45876] 00:18:26.417 bw ( KiB/s): min= 96, max=11112, per=22.36%, avg=4574.40, stdev=4776.59, samples=5 00:18:26.417 iops : min= 24, max= 2778, avg=1143.60, stdev=1194.15, samples=5 00:18:26.417 lat (usec) : 250=3.59%, 500=93.73%, 750=1.01% 00:18:26.417 lat (msec) : 20=0.03%, 50=1.60% 00:18:26.417 cpu : usr=0.64%, sys=2.36%, ctx=2872, majf=0, minf=1 00:18:26.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.417 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.417 issued rwts: total=2871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.418 00:18:26.418 Run status group 0 (all jobs): 00:18:26.418 READ: bw=20.0MiB/s (20.9MB/s), 2395KiB/s-12.3MiB/s (2453kB/s-12.9MB/s), io=76.5MiB (80.3MB), run=2964-3831msec 00:18:26.418 00:18:26.418 Disk stats (read/write): 00:18:26.418 nvme0n1: ios=2121/0, merge=0/0, ticks=3322/0, in_queue=3322, util=95.97% 00:18:26.418 nvme0n2: ios=4455/0, merge=0/0, ticks=3426/0, in_queue=3426, util=95.07% 00:18:26.418 nvme0n3: ios=9829/0, merge=0/0, ticks=3418/0, in_queue=3418, util=99.28% 00:18:26.418 nvme0n4: ios=2908/0, merge=0/0, ticks=3064/0, in_queue=3064, util=99.36% 00:18:26.676 10:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:26.676 10:39:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:26.935 10:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:26.935 10:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:27.192 10:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:27.192 10:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:27.450 10:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:27.450 10:39:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:27.710 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:27.710 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3827108 00:18:27.710 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:27.710 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:27.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.969 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:27.969 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:27.969 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:27.969 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:18:27.969 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:27.969 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:27.969 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:27.969 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:27.969 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:27.969 nvmf hotplug test: fio failed as expected 00:18:27.969 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.227 rmmod nvme_tcp 00:18:28.227 rmmod nvme_fabrics 00:18:28.227 rmmod nvme_keyring 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- 
# modprobe -v -r nvme-fabrics 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3824938 ']' 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3824938 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3824938 ']' 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3824938 00:18:28.227 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:28.228 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:28.228 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3824938 00:18:28.228 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:28.228 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:28.228 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3824938' 00:18:28.228 killing process with pid 3824938 00:18:28.228 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3824938 00:18:28.228 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3824938 00:18:28.488 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:28.488 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:28.488 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:28.488 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.488 10:39:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:28.488 10:39:16 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.488 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.488 10:39:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.398 10:39:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:30.398 00:18:30.398 real 0m23.177s 00:18:30.398 user 1m21.921s 00:18:30.398 sys 0m6.849s 00:18:30.398 10:39:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:30.398 10:39:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.398 ************************************ 00:18:30.398 END TEST nvmf_fio_target 00:18:30.398 ************************************ 00:18:30.398 10:39:18 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:30.398 10:39:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:30.398 10:39:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:30.398 10:39:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:30.658 ************************************ 00:18:30.658 START TEST nvmf_bdevio 00:18:30.658 ************************************ 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:30.658 * Looking for test storage... 
00:18:30.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:30.658 10:39:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:32.566 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:32.566 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:32.566 Found net devices under 0000:08:00.0: cvl_0_0 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.566 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:32.566 Found net devices under 0000:08:00.1: cvl_0_1 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:32.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:18:32.567 00:18:32.567 --- 10.0.0.2 ping statistics --- 00:18:32.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.567 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:32.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:18:32.567 00:18:32.567 --- 10.0.0.1 ping statistics --- 00:18:32.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.567 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3829290 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3829290 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3829290 ']' 00:18:32.567 10:39:20 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:32.567 10:39:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:32.567 [2024-07-23 10:39:20.845424] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:32.567 [2024-07-23 10:39:20.845526] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.567 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.567 [2024-07-23 10:39:20.911917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.567 [2024-07-23 10:39:21.004137] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.567 [2024-07-23 10:39:21.004197] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.567 [2024-07-23 10:39:21.004212] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.567 [2024-07-23 10:39:21.004225] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.567 [2024-07-23 10:39:21.004237] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:32.567 [2024-07-23 10:39:21.004319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:32.567 [2024-07-23 10:39:21.004373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:32.567 [2024-07-23 10:39:21.004422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:32.567 [2024-07-23 10:39:21.004425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.827 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:32.828 [2024-07-23 10:39:21.154203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:32.828 Malloc0 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:32.828 [2024-07-23 10:39:21.204900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:18:32.828 { 00:18:32.828 "params": { 00:18:32.828 "name": "Nvme$subsystem", 00:18:32.828 "trtype": "$TEST_TRANSPORT", 00:18:32.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:32.828 "adrfam": "ipv4", 00:18:32.828 "trsvcid": "$NVMF_PORT", 00:18:32.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:32.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:32.828 "hdgst": ${hdgst:-false}, 00:18:32.828 "ddgst": ${ddgst:-false} 00:18:32.828 }, 00:18:32.828 "method": "bdev_nvme_attach_controller" 00:18:32.828 } 00:18:32.828 EOF 00:18:32.828 )") 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:32.828 10:39:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:32.828 "params": { 00:18:32.828 "name": "Nvme1", 00:18:32.828 "trtype": "tcp", 00:18:32.828 "traddr": "10.0.0.2", 00:18:32.828 "adrfam": "ipv4", 00:18:32.828 "trsvcid": "4420", 00:18:32.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.828 "hdgst": false, 00:18:32.828 "ddgst": false 00:18:32.828 }, 00:18:32.828 "method": "bdev_nvme_attach_controller" 00:18:32.828 }' 00:18:32.828 [2024-07-23 10:39:21.253675] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:18:32.828 [2024-07-23 10:39:21.253776] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3829324 ] 00:18:32.828 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.828 [2024-07-23 10:39:21.315998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:33.118 [2024-07-23 10:39:21.411504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.118 [2024-07-23 10:39:21.411594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.118 [2024-07-23 10:39:21.411626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.118 I/O targets: 00:18:33.118 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:33.118 00:18:33.118 00:18:33.118 CUnit - A unit testing framework for C - Version 2.1-3 00:18:33.118 http://cunit.sourceforge.net/ 00:18:33.118 00:18:33.118 00:18:33.118 Suite: bdevio tests on: Nvme1n1 00:18:33.378 Test: blockdev write read block ...passed 00:18:33.378 Test: blockdev write zeroes read block ...passed 00:18:33.378 Test: blockdev write zeroes read no split ...passed 00:18:33.378 Test: blockdev write zeroes read split ...passed 00:18:33.378 Test: blockdev write zeroes read split partial ...passed 00:18:33.378 Test: blockdev reset ...[2024-07-23 10:39:21.784792] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:33.378 [2024-07-23 10:39:21.784918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11bb760 (9): Bad file descriptor 00:18:33.378 [2024-07-23 10:39:21.836777] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:33.378 passed 00:18:33.378 Test: blockdev write read 8 blocks ...passed 00:18:33.378 Test: blockdev write read size > 128k ...passed 00:18:33.378 Test: blockdev write read invalid size ...passed 00:18:33.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:33.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:33.378 Test: blockdev write read max offset ...passed 00:18:33.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:33.638 Test: blockdev writev readv 8 blocks ...passed 00:18:33.638 Test: blockdev writev readv 30 x 1block ...passed 00:18:33.638 Test: blockdev writev readv block ...passed 00:18:33.638 Test: blockdev writev readv size > 128k ...passed 00:18:33.638 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:33.638 Test: blockdev comparev and writev ...[2024-07-23 10:39:22.013282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.638 [2024-07-23 10:39:22.013322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.638 [2024-07-23 10:39:22.013349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.638 [2024-07-23 10:39:22.013368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.638 [2024-07-23 10:39:22.013714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.638 [2024-07-23 10:39:22.013740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:33.638 [2024-07-23 10:39:22.013764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.638 [2024-07-23 10:39:22.013782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:33.638 [2024-07-23 10:39:22.014110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.638 [2024-07-23 10:39:22.014135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:33.638 [2024-07-23 10:39:22.014159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.638 [2024-07-23 10:39:22.014186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:33.638 [2024-07-23 10:39:22.014526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.638 [2024-07-23 10:39:22.014553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:33.638 [2024-07-23 10:39:22.014577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:33.638 [2024-07-23 10:39:22.014594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:33.638 passed 00:18:33.638 Test: blockdev nvme passthru rw ...passed 00:18:33.638 Test: blockdev nvme passthru vendor specific ...[2024-07-23 10:39:22.098763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:33.638 [2024-07-23 10:39:22.098793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:33.638 [2024-07-23 10:39:22.098948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:33.638 [2024-07-23 10:39:22.098971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:33.638 [2024-07-23 10:39:22.099122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:33.638 [2024-07-23 10:39:22.099145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:33.638 [2024-07-23 10:39:22.099295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:33.638 [2024-07-23 10:39:22.099321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:33.638 passed 00:18:33.638 Test: blockdev nvme admin passthru ...passed 00:18:33.898 Test: blockdev copy ...passed 00:18:33.898 00:18:33.898 Run Summary: Type Total Ran Passed Failed Inactive 00:18:33.898 suites 1 1 n/a 0 0 00:18:33.898 tests 23 23 23 0 0 00:18:33.898 asserts 152 152 152 0 n/a 00:18:33.898 00:18:33.898 Elapsed time = 1.143 seconds 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:33.898 rmmod nvme_tcp 00:18:33.898 rmmod nvme_fabrics 00:18:33.898 rmmod nvme_keyring 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3829290 ']' 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3829290 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 3829290 ']' 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3829290 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:33.898 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3829290 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3829290' 00:18:34.157 killing process with pid 3829290 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 
3829290 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3829290 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.157 10:39:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.693 10:39:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:36.693 00:18:36.693 real 0m5.733s 00:18:36.693 user 0m8.876s 00:18:36.693 sys 0m1.838s 00:18:36.693 10:39:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:36.693 10:39:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:36.693 ************************************ 00:18:36.693 END TEST nvmf_bdevio 00:18:36.693 ************************************ 00:18:36.693 10:39:24 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:36.693 10:39:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:36.693 10:39:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:36.693 10:39:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:36.693 ************************************ 00:18:36.693 START TEST nvmf_auth_target 00:18:36.693 ************************************ 00:18:36.693 10:39:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:36.693 * Looking for test storage... 00:18:36.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:36.693 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.693 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:36.693 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.693 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.693 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.693 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.693 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.693 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.693 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.693 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.693 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.693 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:36.694 
10:39:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:36.694 10:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:38.075 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:38.075 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.075 10:39:26 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:38.075 Found net devices under 0000:08:00.0: cvl_0_0 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:38.075 Found net devices under 0000:08:00.1: cvl_0_1 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:38.075 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:38.076 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:38.076 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:38.076 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:38.076 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:38.076 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:38.335 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:18:38.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:18:38.335 00:18:38.335 --- 10.0.0.2 ping statistics --- 00:18:38.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.335 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:38.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:38.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:18:38.335 00:18:38.335 --- 10.0.0.1 ping statistics --- 00:18:38.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.335 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- 
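The trace above shows `nvmf_tcp_init` moving one NIC into a network namespace so target and initiator can talk over real hardware on one host. The steps can be sketched as the dry-run below; the interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.x addressing are taken from the log, and `RUN=echo` is an added guard so the sketch prints the commands instead of requiring root and the real NICs.

```shell
# Dry-run sketch of the namespace plumbing seen in the log above.
# RUN=echo prints each command; set RUN= (empty) to execute for real.
RUN=${RUN:-echo}

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
$RUN ip -4 addr flush "$TARGET_IF"
$RUN ip -4 addr flush "$INITIATOR_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TARGET_IF" netns "$NS"              # target NIC moves into the namespace
$RUN ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator side stays in the root ns
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
$RUN ip link set "$INITIATOR_IF" up
$RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
$RUN ping -c 1 10.0.0.2                                # reachability check, as in the log
```

With `RUN=echo` the script is side-effect free; the subsequent pings in the log (sub-millisecond RTTs both ways) confirm the two interfaces can reach each other before the NVMe target starts.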
common/autotest_common.sh@10 -- # set +x 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3830920 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3830920 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3830920 ']' 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:38.335 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.593 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:38.593 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:38.593 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.593 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.593 10:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.593 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3830939 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=127b855dddd2214afab84990715a8a2f3d7ca55b8b932da9 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.f0A 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 127b855dddd2214afab84990715a8a2f3d7ca55b8b932da9 0 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 127b855dddd2214afab84990715a8a2f3d7ca55b8b932da9 0 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=127b855dddd2214afab84990715a8a2f3d7ca55b8b932da9 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:38.594 10:39:26 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.f0A 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.f0A 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.f0A 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:38.594 10:39:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d5093885f21832e8981c8cbe27537830d4bade7bff9aba1e9e8a21f4f4eca4d6 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.q9f 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d5093885f21832e8981c8cbe27537830d4bade7bff9aba1e9e8a21f4f4eca4d6 3 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d5093885f21832e8981c8cbe27537830d4bade7bff9aba1e9e8a21f4f4eca4d6 3 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@704 -- # key=d5093885f21832e8981c8cbe27537830d4bade7bff9aba1e9e8a21f4f4eca4d6 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.q9f 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.q9f 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.q9f 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=40f93f577c7c634ada1d2176d425d3b0 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.RP3 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 40f93f577c7c634ada1d2176d425d3b0 1 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 40f93f577c7c634ada1d2176d425d3b0 1 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:38.594 10:39:27 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=40f93f577c7c634ada1d2176d425d3b0 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:38.594 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.RP3 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.RP3 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.RP3 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4c89e0e2351ed9872635b37acb3f56eae387db0aa0759a18 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4k9 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4c89e0e2351ed9872635b37acb3f56eae387db0aa0759a18 2 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
4c89e0e2351ed9872635b37acb3f56eae387db0aa0759a18 2 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4c89e0e2351ed9872635b37acb3f56eae387db0aa0759a18 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4k9 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4k9 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.4k9 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7a4bd682fdcbee1ce639fb16236906272063764ec46b2595 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oAS 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # 
format_dhchap_key 7a4bd682fdcbee1ce639fb16236906272063764ec46b2595 2 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7a4bd682fdcbee1ce639fb16236906272063764ec46b2595 2 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7a4bd682fdcbee1ce639fb16236906272063764ec46b2595 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oAS 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oAS 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.oAS 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cc8bb96786926eb2fc0b02c591547cda 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.DjB 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cc8bb96786926eb2fc0b02c591547cda 1 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cc8bb96786926eb2fc0b02c591547cda 1 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cc8bb96786926eb2fc0b02c591547cda 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.DjB 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.DjB 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.DjB 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=074fd503f10ed321a7f9d2be0303e1fa329ab42b55493801e973448532fe90cd 00:18:38.853 10:39:27 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gQK 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 074fd503f10ed321a7f9d2be0303e1fa329ab42b55493801e973448532fe90cd 3 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 074fd503f10ed321a7f9d2be0303e1fa329ab42b55493801e973448532fe90cd 3 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=074fd503f10ed321a7f9d2be0303e1fa329ab42b55493801e973448532fe90cd 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gQK 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gQK 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.gQK 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3830920 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3830920 ']' 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:38.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:38.853 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.419 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:39.419 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:39.419 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3830939 /var/tmp/host.sock 00:18:39.419 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3830939 ']' 00:18:39.419 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:18:39.419 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:39.419 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:39.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
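The repeated `gen_dhchap_key` traces above each draw random bytes with `xxd`, write them to a `mktemp` file, and run an inline `python -` step to wrap them in the DHHC-1 secret representation. A minimal sketch of that helper follows; the function name and the `xxd`/`mktemp`/`chmod 0600` steps come from the log, while the exact encoding inside the Python step (CRC-32 of the secret appended little-endian, then base64, wrapped as `DHHC-1:<digest-id>:...:`) is an assumption based on the published DHHC-1 secret format, since the log does not show the script body.

```shell
# Sketch of gen_dhchap_key as traced in the log: digest id 0-3
# (null/sha256/sha384/sha512) and key length in hex characters.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len hex chars of randomness
    file=$(mktemp -t spdk.key-XXX)
    # Encoding step; the CRC-32 suffix is an assumption, see lead-in.
    python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode()}:")
EOF
    chmod 0600 "$file"                                 # secrets are owner-readable only
    echo "$file"
}

keyfile=$(gen_dhchap_key 0 48)     # null digest, 48-hex-char key, as at target/auth.sh@67
```

The test builds four key/controller-key pairs this way (null+sha512, sha256+sha384, sha384+sha256, sha512 with no ckey) before registering them on both the target and host RPC sockets.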
00:18:39.419 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:39.419 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.f0A 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.f0A 00:18:39.677 10:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.f0A 00:18:39.935 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.q9f ]] 00:18:39.935 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q9f 00:18:39.935 10:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.935 10:39:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.935 10:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.935 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q9f 00:18:39.935 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q9f 00:18:40.193 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:40.193 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RP3 00:18:40.193 10:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.193 10:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.193 10:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.193 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.RP3 00:18:40.193 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.RP3 00:18:40.451 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.4k9 ]] 00:18:40.451 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4k9 00:18:40.451 10:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.451 10:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.451 10:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.451 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4k9 00:18:40.451 10:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4k9 00:18:40.709 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:40.709 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oAS 00:18:40.709 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.709 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.709 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.709 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.oAS 00:18:40.709 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.oAS 00:18:40.967 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.DjB ]] 00:18:40.967 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DjB 00:18:40.967 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.967 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.967 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.967 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DjB 00:18:40.967 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.DjB 00:18:41.224 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:41.224 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gQK 00:18:41.224 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.224 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.224 10:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.224 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gQK 00:18:41.224 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gQK 00:18:41.482 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:41.482 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:41.482 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.482 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.482 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:41.482 10:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:41.739 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:41.739 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.739 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:41.739 10:39:30 
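The loop traced above (target/auth.sh@81-86) registers each generated key file twice: once with the target via `rpc_cmd keyring_file_add_key key<i> <path>`, and once with the host daemon via `rpc.py -s /var/tmp/host.sock`, skipping the controller-key registration when no ckey file was generated (the `[[ -n '' ]]` branch hit for key3). A minimal Python sketch of that pairing logic; the key/ckey paths are copied from the trace, but the helper itself is illustrative and not part of the SPDK scripts:

```python
def keyring_add_cmds(keys, ckeys):
    """Build the keyring_file_add_key RPC argument lists the loop issues.

    keys/ckeys map a key index to a key file path; a missing ckey entry
    (like ckey3 in the trace) simply skips the ctrlr-key registration.
    """
    cmds = []
    for i in sorted(keys):
        cmds.append(["keyring_file_add_key", f"key{i}", keys[i]])
        if ckeys.get(i):  # mirrors the [[ -n $ckeyfile ]] guard
            cmds.append(["keyring_file_add_key", f"ckey{i}", ckeys[i]])
    return cmds

# Paths as they appear in this part of the trace (key0 was registered in an
# earlier chunk of the log, so it is omitted here).
keys = {1: "/tmp/spdk.key-sha256.RP3",
        2: "/tmp/spdk.key-sha384.oAS",
        3: "/tmp/spdk.key-sha512.gQK"}
ckeys = {1: "/tmp/spdk.key-sha384.4k9",
         2: "/tmp/spdk.key-sha256.DjB"}  # no ckey3 was generated

cmds = keyring_add_cmds(keys, ckeys)
```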
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:41.739 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:41.739 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.739 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.739 10:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.739 10:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.739 10:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.739 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.739 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.997 00:18:41.997 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.997 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.997 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.268 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.268 
10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.268 10:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.268 10:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.268 10:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.268 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.268 { 00:18:42.268 "cntlid": 1, 00:18:42.268 "qid": 0, 00:18:42.268 "state": "enabled", 00:18:42.268 "listen_address": { 00:18:42.268 "trtype": "TCP", 00:18:42.268 "adrfam": "IPv4", 00:18:42.268 "traddr": "10.0.0.2", 00:18:42.268 "trsvcid": "4420" 00:18:42.268 }, 00:18:42.268 "peer_address": { 00:18:42.268 "trtype": "TCP", 00:18:42.268 "adrfam": "IPv4", 00:18:42.268 "traddr": "10.0.0.1", 00:18:42.268 "trsvcid": "57368" 00:18:42.268 }, 00:18:42.268 "auth": { 00:18:42.268 "state": "completed", 00:18:42.268 "digest": "sha256", 00:18:42.268 "dhgroup": "null" 00:18:42.268 } 00:18:42.268 } 00:18:42.268 ]' 00:18:42.268 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.268 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.268 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.268 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:42.268 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.567 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.567 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.567 10:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
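After each attach, the test fetches the subsystem's qpairs and uses `jq` filters (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) to assert that the negotiated authentication parameters match what was configured. The same check, applied to the qpair JSON printed above and sketched in Python (field names are exactly those in the RPC output; the helper is illustrative):

```python
import json

# Trimmed to the fields the jq filters actually read, values copied
# from the qpair listing in the trace.
qpairs_json = """[
  {
    "cntlid": 1,
    "qid": 0,
    "state": "enabled",
    "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}
  }
]"""

def check_auth(qpairs, digest, dhgroup):
    """Mirror the jq assertions on the first qpair's auth block."""
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

ok = check_auth(json.loads(qpairs_json), "sha256", "null")
```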
bdev_nvme_detach_controller nvme0 00:18:42.831 10:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:18:43.764 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.021 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:44.021 10:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.021 10:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.021 10:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.021 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.021 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.021 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.279 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:44.279 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.279 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
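The `nvme connect` invocation above passes the host and controller secrets in the NVMe DH-HMAC-CHAP string representation, `DHHC-1:<tt>:<base64>:`, where (per the NVMe specification) `tt` is `00` for an unhashed secret or `01`/`02`/`03` for a SHA-256/384/512-transformed one, and the base64 payload carries the secret bytes followed by a 4-byte CRC-32. A small, illustrative parser; the validation beyond the prefix/field split reflects my reading of that format, not anything checked by the log itself:

```python
import base64

def parse_dhchap_secret(s):
    """Split a DHHC-1 secret string into its transform id and raw payload."""
    prefix, transform, b64, trailer = s.split(":")
    if prefix != "DHHC-1" or trailer != "":
        raise ValueError("not a DHHC-1 secret representation")
    # Payload is assumed to be the secret followed by a 4-byte CRC-32.
    return transform, base64.b64decode(b64)

# A host secret copied verbatim from the nvme connect line in the trace:
secret = ("DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1"
          "YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==:")
transform, payload = parse_dhchap_secret(secret)
```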
00:18:44.279 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:44.279 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.279 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.279 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.279 10:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.279 10:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.279 10:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.279 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.279 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.536 00:18:44.536 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.536 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.536 10:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.793 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:18:44.793 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.793 10:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.793 10:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.793 10:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.793 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.793 { 00:18:44.793 "cntlid": 3, 00:18:44.793 "qid": 0, 00:18:44.793 "state": "enabled", 00:18:44.793 "listen_address": { 00:18:44.793 "trtype": "TCP", 00:18:44.793 "adrfam": "IPv4", 00:18:44.793 "traddr": "10.0.0.2", 00:18:44.793 "trsvcid": "4420" 00:18:44.793 }, 00:18:44.793 "peer_address": { 00:18:44.793 "trtype": "TCP", 00:18:44.793 "adrfam": "IPv4", 00:18:44.793 "traddr": "10.0.0.1", 00:18:44.793 "trsvcid": "57394" 00:18:44.793 }, 00:18:44.793 "auth": { 00:18:44.793 "state": "completed", 00:18:44.793 "digest": "sha256", 00:18:44.793 "dhgroup": "null" 00:18:44.793 } 00:18:44.793 } 00:18:44.793 ]' 00:18:44.793 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.793 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.793 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.050 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:45.050 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.050 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.050 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.050 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.307 10:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:18:46.681 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.681 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:46.681 10:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.681 10:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.681 10:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.681 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.681 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:46.681 10:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:46.681 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:46.681 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.681 10:39:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.681 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:46.681 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.681 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.681 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.681 10:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.681 10:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.681 10:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.681 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.681 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.939 00:18:47.197 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.197 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.197 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.197 10:39:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.197 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.197 10:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.197 10:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.197 10:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.455 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.455 { 00:18:47.455 "cntlid": 5, 00:18:47.455 "qid": 0, 00:18:47.455 "state": "enabled", 00:18:47.455 "listen_address": { 00:18:47.455 "trtype": "TCP", 00:18:47.455 "adrfam": "IPv4", 00:18:47.455 "traddr": "10.0.0.2", 00:18:47.455 "trsvcid": "4420" 00:18:47.455 }, 00:18:47.455 "peer_address": { 00:18:47.455 "trtype": "TCP", 00:18:47.455 "adrfam": "IPv4", 00:18:47.455 "traddr": "10.0.0.1", 00:18:47.455 "trsvcid": "44488" 00:18:47.455 }, 00:18:47.455 "auth": { 00:18:47.455 "state": "completed", 00:18:47.455 "digest": "sha256", 00:18:47.455 "dhgroup": "null" 00:18:47.455 } 00:18:47.455 } 00:18:47.455 ]' 00:18:47.455 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.455 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.455 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.455 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:47.455 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.455 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.455 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.455 10:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.714 10:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:18:49.088 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.088 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:49.088 10:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.088 10:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.088 10:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.088 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.088 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.088 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.088 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:49.088 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.088 10:39:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:49.088 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:49.088 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:49.089 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.089 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:18:49.089 10:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.089 10:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.089 10:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.089 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.089 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.347 00:18:49.605 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.605 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.605 10:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.605 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:18:49.605 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.605 10:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.605 10:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.863 10:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.863 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.863 { 00:18:49.863 "cntlid": 7, 00:18:49.863 "qid": 0, 00:18:49.863 "state": "enabled", 00:18:49.863 "listen_address": { 00:18:49.863 "trtype": "TCP", 00:18:49.863 "adrfam": "IPv4", 00:18:49.863 "traddr": "10.0.0.2", 00:18:49.863 "trsvcid": "4420" 00:18:49.863 }, 00:18:49.863 "peer_address": { 00:18:49.863 "trtype": "TCP", 00:18:49.863 "adrfam": "IPv4", 00:18:49.863 "traddr": "10.0.0.1", 00:18:49.863 "trsvcid": "44524" 00:18:49.863 }, 00:18:49.863 "auth": { 00:18:49.863 "state": "completed", 00:18:49.863 "digest": "sha256", 00:18:49.863 "dhgroup": "null" 00:18:49.863 } 00:18:49.863 } 00:18:49.863 ]' 00:18:49.863 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.863 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.863 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.863 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:49.863 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.863 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.863 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.863 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.121 10:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- 
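The surrounding loops (target/auth.sh@91-93) iterate every digest, dhgroup, and key-index combination, reconfiguring the host with `bdev_nvme_set_options` before each `connect_authenticate` call; this chunk shows the sha256/null pass for keys 0-3 finishing and the sha256/ffdhe2048 pass starting. A sketch of that iteration order, with the digest and dhgroup lists deliberately limited to the values visible in this part of the log (the arrays in auth.sh are longer):

```python
# Only sha256 and the first two dhgroups appear in this part of the log;
# the real digest/dhgroup arrays in auth.sh contain more entries.
digests = ["sha256"]
dhgroups = ["null", "ffdhe2048"]
keyids = [0, 1, 2, 3]

# Same nesting as the shell loops: digest outermost, key index innermost.
order = [(d, g, k) for d in digests for g in dhgroups for k in keyids]
```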
target/auth.sh@36 -- # digest=sha256 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.497 10:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.064 00:18:52.064 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.064 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.064 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.322 { 00:18:52.322 "cntlid": 9, 00:18:52.322 "qid": 0, 00:18:52.322 "state": "enabled", 00:18:52.322 "listen_address": { 00:18:52.322 "trtype": "TCP", 00:18:52.322 "adrfam": "IPv4", 00:18:52.322 "traddr": "10.0.0.2", 00:18:52.322 "trsvcid": "4420" 00:18:52.322 }, 00:18:52.322 "peer_address": { 00:18:52.322 "trtype": "TCP", 00:18:52.322 "adrfam": "IPv4", 00:18:52.322 "traddr": "10.0.0.1", 00:18:52.322 "trsvcid": "44556" 00:18:52.322 }, 00:18:52.322 "auth": { 00:18:52.322 "state": "completed", 00:18:52.322 "digest": "sha256", 00:18:52.322 "dhgroup": "ffdhe2048" 00:18:52.322 } 00:18:52.322 } 00:18:52.322 ]' 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.322 10:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.888 10:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:18:53.822 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.822 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:53.822 10:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.822 10:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.822 10:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.822 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.822 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.822 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.080 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:54.080 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:18:54.080 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:54.080 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:54.081 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:54.081 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.081 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.081 10:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.081 10:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.081 10:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.081 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.081 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.646 00:18:54.646 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.646 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.646 10:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.904 { 00:18:54.904 "cntlid": 11, 00:18:54.904 "qid": 0, 00:18:54.904 "state": "enabled", 00:18:54.904 "listen_address": { 00:18:54.904 "trtype": "TCP", 00:18:54.904 "adrfam": "IPv4", 00:18:54.904 "traddr": "10.0.0.2", 00:18:54.904 "trsvcid": "4420" 00:18:54.904 }, 00:18:54.904 "peer_address": { 00:18:54.904 "trtype": "TCP", 00:18:54.904 "adrfam": "IPv4", 00:18:54.904 "traddr": "10.0.0.1", 00:18:54.904 "trsvcid": "44578" 00:18:54.904 }, 00:18:54.904 "auth": { 00:18:54.904 "state": "completed", 00:18:54.904 "digest": "sha256", 00:18:54.904 "dhgroup": "ffdhe2048" 00:18:54.904 } 00:18:54.904 } 00:18:54.904 ]' 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
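The `jq` checks above assert the negotiated auth parameters on the first qpair (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). As a hedged, standalone sketch of that same validation (operating on a qpairs document shaped like the `nvmf_subsystem_get_qpairs` output in this log, not on a live RPC response):

```python
import json

# Illustrative copy of the qpairs JSON printed by this log, not a live
# rpc_cmd nvmf_subsystem_get_qpairs response.
QPAIRS_JSON = '''
[
  {
    "cntlid": 11,
    "qid": 0,
    "state": "enabled",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.2", "trsvcid": "4420"},
    "peer_address": {"trtype": "TCP", "adrfam": "IPv4",
                     "traddr": "10.0.0.1", "trsvcid": "44578"},
    "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe2048"}
  }
]
'''

def check_auth(qpairs_json, digest, dhgroup):
    """Mirror the script's jq assertions on .[0].auth.{state,digest,dhgroup}."""
    auth = json.loads(qpairs_json)[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

print(check_auth(QPAIRS_JSON, "sha256", "ffdhe2048"))  # True
```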
00:18:54.904 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.470 10:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:18:56.404 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.404 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:56.404 10:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.404 10:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.404 10:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.404 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.404 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.404 10:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.662 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:56.662 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:18:56.662 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.662 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:56.662 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.662 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.662 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.662 10:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.662 10:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.662 10:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.662 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.662 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.228 00:18:57.228 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.228 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.228 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.486 { 00:18:57.486 "cntlid": 13, 00:18:57.486 "qid": 0, 00:18:57.486 "state": "enabled", 00:18:57.486 "listen_address": { 00:18:57.486 "trtype": "TCP", 00:18:57.486 "adrfam": "IPv4", 00:18:57.486 "traddr": "10.0.0.2", 00:18:57.486 "trsvcid": "4420" 00:18:57.486 }, 00:18:57.486 "peer_address": { 00:18:57.486 "trtype": "TCP", 00:18:57.486 "adrfam": "IPv4", 00:18:57.486 "traddr": "10.0.0.1", 00:18:57.486 "trsvcid": "43136" 00:18:57.486 }, 00:18:57.486 "auth": { 00:18:57.486 "state": "completed", 00:18:57.486 "digest": "sha256", 00:18:57.486 "dhgroup": "ffdhe2048" 00:18:57.486 } 00:18:57.486 } 00:18:57.486 ]' 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:57.486 10:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.052 10:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:18:58.986 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.986 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:58.986 10:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.986 10:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.986 10:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.986 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.986 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:58.986 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:59.244 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:59.244 10:39:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.244 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.244 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:59.244 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.244 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.244 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:18:59.244 10:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.244 10:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.244 10:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.244 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.244 10:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.810 00:18:59.810 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.810 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.810 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.068 { 00:19:00.068 "cntlid": 15, 00:19:00.068 "qid": 0, 00:19:00.068 "state": "enabled", 00:19:00.068 "listen_address": { 00:19:00.068 "trtype": "TCP", 00:19:00.068 "adrfam": "IPv4", 00:19:00.068 "traddr": "10.0.0.2", 00:19:00.068 "trsvcid": "4420" 00:19:00.068 }, 00:19:00.068 "peer_address": { 00:19:00.068 "trtype": "TCP", 00:19:00.068 "adrfam": "IPv4", 00:19:00.068 "traddr": "10.0.0.1", 00:19:00.068 "trsvcid": "43162" 00:19:00.068 }, 00:19:00.068 "auth": { 00:19:00.068 "state": "completed", 00:19:00.068 "digest": "sha256", 00:19:00.068 "dhgroup": "ffdhe2048" 00:19:00.068 } 00:19:00.068 } 00:19:00.068 ]' 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
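Each `connect_authenticate` pass builds its controller-key flag with the bash expansion `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})`: the flag is emitted only when a ckey is defined for that key index, which is why the key3 pass above calls `nvmf_subsystem_add_host` with `--dhchap-key key3` alone (unidirectional auth) while key0/key1/key2 also pass `--dhchap-ctrlr-key`. A minimal sketch of that argument construction, with hypothetical function and parameter names chosen for illustration:

```python
def build_host_args(subnqn, hostnqn, keyid, have_ckey):
    """Sketch of the auth.sh argument list for nvmf_subsystem_add_host.

    The controller-key flag is appended only when a ckey exists for this
    key index, matching the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}
    expansion (key3 in this log has no ckey, so the flag is dropped).
    """
    args = [subnqn, hostnqn, "--dhchap-key", f"key{keyid}"]
    if have_ckey:
        args += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    return args

# Bidirectional (key1) vs unidirectional (key3), as in this log:
bidir = build_host_args(
    "nqn.2024-03.io.spdk:cnode0",
    "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc",
    1, True)
unidir = build_host_args(
    "nqn.2024-03.io.spdk:cnode0",
    "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc",
    3, False)
```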
00:19:00.068 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.326 10:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:19:01.701 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.701 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:01.701 10:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.701 10:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.701 10:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.701 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.701 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.701 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:01.701 10:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:01.959 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:01.959 10:39:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.959 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.959 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:01.959 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.959 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.959 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.959 10:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.959 10:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.959 10:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.959 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.959 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.217 00:19:02.217 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.217 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.217 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.476 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.476 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.476 10:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.476 10:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.734 10:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.734 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.734 { 00:19:02.734 "cntlid": 17, 00:19:02.734 "qid": 0, 00:19:02.734 "state": "enabled", 00:19:02.734 "listen_address": { 00:19:02.734 "trtype": "TCP", 00:19:02.734 "adrfam": "IPv4", 00:19:02.734 "traddr": "10.0.0.2", 00:19:02.734 "trsvcid": "4420" 00:19:02.734 }, 00:19:02.734 "peer_address": { 00:19:02.734 "trtype": "TCP", 00:19:02.734 "adrfam": "IPv4", 00:19:02.734 "traddr": "10.0.0.1", 00:19:02.734 "trsvcid": "43180" 00:19:02.734 }, 00:19:02.734 "auth": { 00:19:02.734 "state": "completed", 00:19:02.734 "digest": "sha256", 00:19:02.734 "dhgroup": "ffdhe3072" 00:19:02.734 } 00:19:02.734 } 00:19:02.734 ]' 00:19:02.734 10:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.734 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.734 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.734 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:02.734 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.734 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.734 10:39:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.734 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.992 10:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.366 10:39:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.366 10:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.945 00:19:04.945 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.945 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:19:04.945 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.206 { 00:19:05.206 "cntlid": 19, 00:19:05.206 "qid": 0, 00:19:05.206 "state": "enabled", 00:19:05.206 "listen_address": { 00:19:05.206 "trtype": "TCP", 00:19:05.206 "adrfam": "IPv4", 00:19:05.206 "traddr": "10.0.0.2", 00:19:05.206 "trsvcid": "4420" 00:19:05.206 }, 00:19:05.206 "peer_address": { 00:19:05.206 "trtype": "TCP", 00:19:05.206 "adrfam": "IPv4", 00:19:05.206 "traddr": "10.0.0.1", 00:19:05.206 "trsvcid": "43212" 00:19:05.206 }, 00:19:05.206 "auth": { 00:19:05.206 "state": "completed", 00:19:05.206 "digest": "sha256", 00:19:05.206 "dhgroup": "ffdhe3072" 00:19:05.206 } 00:19:05.206 } 00:19:05.206 ]' 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.206 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.772 10:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:19:06.706 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.706 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:06.706 10:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.706 10:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.706 10:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.706 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.706 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:06.706 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
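The `--dhchap-secret` strings in these `nvme connect` calls use the `DHHC-1:<hh>:<base64>:` representation from nvme-cli, where (assuming the TP 8006 in-band authentication secret layout) the middle field identifies the key transformation hash (`00` = unhashed, `01`/`02`/`03` = SHA-256/384/512) and the base64 payload carries the key followed by a 4-byte CRC-32. A structural parsing sketch under those assumptions (it does not verify the CRC):

```python
import base64

def parse_dhchap_secret(secret):
    """Split a DHHC-1 secret into (version, hash_id, key_bytes).

    Assumed layout, per the nvme-cli secrets seen in this log:
    'DHHC-1:<hh>:<base64 of key || 4-byte CRC-32>:'.
    The trailing ':' yields an empty fourth field. CRC is not checked.
    """
    version, hash_id, b64, trailer = secret.split(":")
    if version != "DHHC-1" or trailer != "":
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(b64)
    return version, hash_id, blob[:-4]  # strip the assumed CRC-32 suffix

# One of the key0 secrets from this log:
v, h, key = parse_dhchap_secret(
    "DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1"
    "YjhiOTMyZGE5YQoNIQ==:")
```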
00:19:06.965 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:06.965 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.965 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.965 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:06.965 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.965 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.965 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.965 10:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.965 10:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.965 10:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.965 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.965 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.531 00:19:07.531 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.531 10:39:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.531 10:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.789 { 00:19:07.789 "cntlid": 21, 00:19:07.789 "qid": 0, 00:19:07.789 "state": "enabled", 00:19:07.789 "listen_address": { 00:19:07.789 "trtype": "TCP", 00:19:07.789 "adrfam": "IPv4", 00:19:07.789 "traddr": "10.0.0.2", 00:19:07.789 "trsvcid": "4420" 00:19:07.789 }, 00:19:07.789 "peer_address": { 00:19:07.789 "trtype": "TCP", 00:19:07.789 "adrfam": "IPv4", 00:19:07.789 "traddr": "10.0.0.1", 00:19:07.789 "trsvcid": "50418" 00:19:07.789 }, 00:19:07.789 "auth": { 00:19:07.789 "state": "completed", 00:19:07.789 "digest": "sha256", 00:19:07.789 "dhgroup": "ffdhe3072" 00:19:07.789 } 00:19:07.789 } 00:19:07.789 ]' 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.789 10:39:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.789 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.047 10:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:19:09.472 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.472 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:09.472 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.472 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.472 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.472 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.472 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:09.472 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe3072 00:19:09.730 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:09.730 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.730 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.730 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:09.730 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:09.730 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.730 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:19:09.730 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.730 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.730 10:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.730 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.730 10:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.988 00:19:09.988 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.988 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:19:09.988 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.247 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.247 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.247 10:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.247 10:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.247 10:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.247 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.247 { 00:19:10.247 "cntlid": 23, 00:19:10.247 "qid": 0, 00:19:10.247 "state": "enabled", 00:19:10.247 "listen_address": { 00:19:10.247 "trtype": "TCP", 00:19:10.247 "adrfam": "IPv4", 00:19:10.247 "traddr": "10.0.0.2", 00:19:10.247 "trsvcid": "4420" 00:19:10.247 }, 00:19:10.247 "peer_address": { 00:19:10.247 "trtype": "TCP", 00:19:10.247 "adrfam": "IPv4", 00:19:10.247 "traddr": "10.0.0.1", 00:19:10.247 "trsvcid": "50444" 00:19:10.247 }, 00:19:10.247 "auth": { 00:19:10.247 "state": "completed", 00:19:10.247 "digest": "sha256", 00:19:10.247 "dhgroup": "ffdhe3072" 00:19:10.247 } 00:19:10.247 } 00:19:10.247 ]' 00:19:10.247 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.247 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.247 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.247 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:10.247 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.505 10:39:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.505 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.505 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.505 10:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:19:11.880 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.880 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:11.880 10:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.880 10:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.880 10:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.880 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.880 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.880 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.880 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.139 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:12.139 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.139 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:12.139 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:12.139 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:12.139 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.139 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.139 10:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.139 10:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.139 10:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.139 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.139 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.705 00:19:12.705 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:19:12.705 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.705 10:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.963 { 00:19:12.963 "cntlid": 25, 00:19:12.963 "qid": 0, 00:19:12.963 "state": "enabled", 00:19:12.963 "listen_address": { 00:19:12.963 "trtype": "TCP", 00:19:12.963 "adrfam": "IPv4", 00:19:12.963 "traddr": "10.0.0.2", 00:19:12.963 "trsvcid": "4420" 00:19:12.963 }, 00:19:12.963 "peer_address": { 00:19:12.963 "trtype": "TCP", 00:19:12.963 "adrfam": "IPv4", 00:19:12.963 "traddr": "10.0.0.1", 00:19:12.963 "trsvcid": "50468" 00:19:12.963 }, 00:19:12.963 "auth": { 00:19:12.963 "state": "completed", 00:19:12.963 "digest": "sha256", 00:19:12.963 "dhgroup": "ffdhe4096" 00:19:12.963 } 00:19:12.963 } 00:19:12.963 ]' 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.963 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.221 10:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:19:14.595 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.595 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:14.595 10:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.595 10:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.595 10:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.595 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.595 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.595 10:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.853 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:14.853 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.853 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.853 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:14.853 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:14.853 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.853 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.853 10:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.853 10:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.853 10:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.853 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.853 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.111 
00:19:15.111 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.111 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.111 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.369 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.369 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.369 10:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.369 10:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.627 10:40:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.627 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.627 { 00:19:15.627 "cntlid": 27, 00:19:15.627 "qid": 0, 00:19:15.627 "state": "enabled", 00:19:15.627 "listen_address": { 00:19:15.627 "trtype": "TCP", 00:19:15.627 "adrfam": "IPv4", 00:19:15.627 "traddr": "10.0.0.2", 00:19:15.627 "trsvcid": "4420" 00:19:15.627 }, 00:19:15.627 "peer_address": { 00:19:15.627 "trtype": "TCP", 00:19:15.627 "adrfam": "IPv4", 00:19:15.627 "traddr": "10.0.0.1", 00:19:15.627 "trsvcid": "50514" 00:19:15.627 }, 00:19:15.627 "auth": { 00:19:15.627 "state": "completed", 00:19:15.627 "digest": "sha256", 00:19:15.627 "dhgroup": "ffdhe4096" 00:19:15.627 } 00:19:15.627 } 00:19:15.627 ]' 00:19:15.627 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.627 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.627 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.627 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.627 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.627 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.627 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.627 10:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.885 10:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.257 10:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.514 10:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.514 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.514 10:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:17.772 00:19:17.772 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.772 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.772 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.030 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.030 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.030 10:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.030 10:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.030 10:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.030 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.030 { 00:19:18.030 "cntlid": 29, 00:19:18.030 "qid": 0, 00:19:18.030 "state": "enabled", 00:19:18.030 "listen_address": { 00:19:18.030 "trtype": "TCP", 00:19:18.030 "adrfam": "IPv4", 00:19:18.030 "traddr": "10.0.0.2", 00:19:18.030 "trsvcid": "4420" 00:19:18.030 }, 00:19:18.030 "peer_address": { 00:19:18.030 "trtype": "TCP", 00:19:18.030 "adrfam": "IPv4", 00:19:18.030 "traddr": "10.0.0.1", 00:19:18.030 "trsvcid": "51166" 00:19:18.030 }, 00:19:18.030 "auth": { 00:19:18.030 "state": "completed", 00:19:18.030 "digest": "sha256", 00:19:18.030 "dhgroup": "ffdhe4096" 00:19:18.030 } 00:19:18.030 } 00:19:18.030 ]' 00:19:18.030 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.288 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.288 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.288 10:40:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:18.288 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.288 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.288 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.288 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.546 10:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:19:19.919 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.920 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:19.920 10:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.920 10:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.920 10:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.920 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.920 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 
00:19:19.920 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:20.177 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:20.177 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.177 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.177 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:20.177 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:20.177 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.177 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:19:20.177 10:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.177 10:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.177 10:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.177 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.177 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.435 
00:19:20.435 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.435 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.435 10:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.692 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.692 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.692 10:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.692 10:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.692 10:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.692 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.692 { 00:19:20.692 "cntlid": 31, 00:19:20.692 "qid": 0, 00:19:20.692 "state": "enabled", 00:19:20.692 "listen_address": { 00:19:20.692 "trtype": "TCP", 00:19:20.692 "adrfam": "IPv4", 00:19:20.692 "traddr": "10.0.0.2", 00:19:20.692 "trsvcid": "4420" 00:19:20.692 }, 00:19:20.692 "peer_address": { 00:19:20.692 "trtype": "TCP", 00:19:20.692 "adrfam": "IPv4", 00:19:20.692 "traddr": "10.0.0.1", 00:19:20.692 "trsvcid": "51192" 00:19:20.692 }, 00:19:20.692 "auth": { 00:19:20.692 "state": "completed", 00:19:20.692 "digest": "sha256", 00:19:20.692 "dhgroup": "ffdhe4096" 00:19:20.692 } 00:19:20.692 } 00:19:20.692 ]' 00:19:20.692 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.692 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.692 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.950 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.950 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.950 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.950 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.950 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.207 10:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:19:22.580 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.580 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:22.580 10:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.580 10:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.580 10:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.580 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.580 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.580 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:19:22.580 10:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:22.580 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:22.580 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.580 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.580 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:22.580 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.580 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.580 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.580 10:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.580 10:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.838 10:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.838 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.838 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.404 00:19:23.404 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.404 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.404 10:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.662 { 00:19:23.662 "cntlid": 33, 00:19:23.662 "qid": 0, 00:19:23.662 "state": "enabled", 00:19:23.662 "listen_address": { 00:19:23.662 "trtype": "TCP", 00:19:23.662 "adrfam": "IPv4", 00:19:23.662 "traddr": "10.0.0.2", 00:19:23.662 "trsvcid": "4420" 00:19:23.662 }, 00:19:23.662 "peer_address": { 00:19:23.662 "trtype": "TCP", 00:19:23.662 "adrfam": "IPv4", 00:19:23.662 "traddr": "10.0.0.1", 00:19:23.662 "trsvcid": "51220" 00:19:23.662 }, 00:19:23.662 "auth": { 00:19:23.662 "state": "completed", 00:19:23.662 "digest": "sha256", 00:19:23.662 "dhgroup": "ffdhe6144" 00:19:23.662 } 00:19:23.662 } 00:19:23.662 ]' 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.662 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.228 10:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:19:25.161 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.161 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:25.161 10:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.162 10:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.419 10:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.419 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.419 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- 
# hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:25.419 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:25.678 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:25.678 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.678 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.678 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:25.678 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:25.678 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.678 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.678 10:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.678 10:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.678 10:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.678 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.678 10:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.244 00:19:26.244 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.244 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.244 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.502 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.502 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.502 10:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.502 10:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.502 10:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.502 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.502 { 00:19:26.502 "cntlid": 35, 00:19:26.502 "qid": 0, 00:19:26.502 "state": "enabled", 00:19:26.502 "listen_address": { 00:19:26.502 "trtype": "TCP", 00:19:26.502 "adrfam": "IPv4", 00:19:26.502 "traddr": "10.0.0.2", 00:19:26.502 "trsvcid": "4420" 00:19:26.502 }, 00:19:26.502 "peer_address": { 00:19:26.502 "trtype": "TCP", 00:19:26.502 "adrfam": "IPv4", 00:19:26.502 "traddr": "10.0.0.1", 00:19:26.502 "trsvcid": "51256" 00:19:26.502 }, 00:19:26.502 "auth": { 00:19:26.502 "state": "completed", 00:19:26.502 "digest": "sha256", 00:19:26.502 "dhgroup": "ffdhe6144" 00:19:26.502 } 00:19:26.502 } 00:19:26.502 ]' 00:19:26.502 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.502 10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.502 
10:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.502 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:26.759 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.759 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.759 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.759 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.015 10:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.388 10:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.322 00:19:29.322 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.322 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.322 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.322 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.322 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.322 10:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.322 10:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.322 10:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.322 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.322 { 00:19:29.322 "cntlid": 37, 00:19:29.322 "qid": 0, 00:19:29.322 "state": "enabled", 00:19:29.322 "listen_address": { 00:19:29.322 "trtype": "TCP", 00:19:29.322 "adrfam": "IPv4", 00:19:29.323 "traddr": "10.0.0.2", 00:19:29.323 "trsvcid": "4420" 00:19:29.323 }, 00:19:29.323 "peer_address": { 00:19:29.323 "trtype": "TCP", 00:19:29.323 "adrfam": "IPv4", 00:19:29.323 "traddr": "10.0.0.1", 00:19:29.323 "trsvcid": "54928" 00:19:29.323 }, 00:19:29.323 "auth": { 00:19:29.323 "state": "completed", 00:19:29.323 "digest": "sha256", 00:19:29.323 "dhgroup": "ffdhe6144" 00:19:29.323 } 00:19:29.323 } 00:19:29.323 ]' 00:19:29.323 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.580 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:29.580 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.580 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:29.580 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.580 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.580 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.580 10:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.839 10:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.212 
10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.212 10:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.197 00:19:32.197 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.197 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.197 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.197 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.197 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.197 10:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.197 10:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.197 10:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.197 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.197 { 00:19:32.197 "cntlid": 39, 00:19:32.197 "qid": 0, 00:19:32.197 "state": "enabled", 00:19:32.197 "listen_address": { 00:19:32.197 "trtype": "TCP", 00:19:32.197 "adrfam": "IPv4", 00:19:32.197 "traddr": "10.0.0.2", 00:19:32.197 "trsvcid": "4420" 00:19:32.197 }, 00:19:32.197 "peer_address": { 00:19:32.197 "trtype": "TCP", 00:19:32.197 "adrfam": "IPv4", 00:19:32.197 "traddr": "10.0.0.1", 00:19:32.197 "trsvcid": "54952" 00:19:32.197 }, 00:19:32.197 "auth": { 00:19:32.197 "state": "completed", 00:19:32.197 "digest": "sha256", 00:19:32.197 "dhgroup": "ffdhe6144" 00:19:32.197 } 00:19:32.197 } 00:19:32.197 ]' 00:19:32.197 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.455 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.455 10:40:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.455 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.455 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.455 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.455 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.455 10:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.713 10:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.086 
10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.086 10:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.493 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.493 { 00:19:35.493 "cntlid": 41, 00:19:35.493 "qid": 0, 00:19:35.493 "state": "enabled", 00:19:35.493 "listen_address": { 00:19:35.493 "trtype": "TCP", 00:19:35.493 "adrfam": "IPv4", 00:19:35.493 "traddr": "10.0.0.2", 00:19:35.493 "trsvcid": "4420" 00:19:35.493 }, 00:19:35.493 "peer_address": { 00:19:35.493 "trtype": "TCP", 00:19:35.493 "adrfam": "IPv4", 00:19:35.493 "traddr": "10.0.0.1", 00:19:35.493 "trsvcid": "54968" 00:19:35.493 }, 00:19:35.493 "auth": { 00:19:35.493 "state": "completed", 00:19:35.493 "digest": "sha256", 00:19:35.493 "dhgroup": "ffdhe8192" 00:19:35.493 } 00:19:35.493 } 00:19:35.493 ]' 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:35.493 10:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.751 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.751 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.751 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.009 10:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.382 10:40:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.382 10:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.316 00:19:38.316 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.316 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.316 10:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.575 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.575 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.575 10:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.575 10:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.575 10:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.575 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.575 { 00:19:38.575 "cntlid": 43, 00:19:38.575 "qid": 0, 00:19:38.575 "state": "enabled", 00:19:38.575 "listen_address": { 00:19:38.575 "trtype": "TCP", 00:19:38.575 "adrfam": "IPv4", 00:19:38.575 "traddr": "10.0.0.2", 00:19:38.575 "trsvcid": "4420" 00:19:38.575 }, 00:19:38.575 "peer_address": { 00:19:38.575 "trtype": "TCP", 00:19:38.575 "adrfam": "IPv4", 00:19:38.575 "traddr": "10.0.0.1", 00:19:38.575 "trsvcid": "55054" 00:19:38.575 }, 00:19:38.575 "auth": { 00:19:38.575 "state": "completed", 00:19:38.575 "digest": "sha256", 00:19:38.575 "dhgroup": "ffdhe8192" 00:19:38.575 } 00:19:38.575 } 00:19:38.575 ]' 00:19:38.575 10:40:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.833 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.833 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.833 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.833 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.833 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.833 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.833 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.091 10:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:19:40.464 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.464 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:40.464 10:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.464 10:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.464 10:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:19:40.464 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.464 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:40.464 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:40.464 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:40.464 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.464 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.464 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:40.465 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.465 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.465 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.465 10:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.465 10:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.465 10:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.465 10:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.465 10:40:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.839 00:19:41.839 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.839 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.839 10:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.839 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.839 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.839 10:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.839 10:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.839 10:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.839 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.839 { 00:19:41.839 "cntlid": 45, 00:19:41.839 "qid": 0, 00:19:41.839 "state": "enabled", 00:19:41.839 "listen_address": { 00:19:41.839 "trtype": "TCP", 00:19:41.839 "adrfam": "IPv4", 00:19:41.839 "traddr": "10.0.0.2", 00:19:41.839 "trsvcid": "4420" 00:19:41.839 }, 00:19:41.839 "peer_address": { 00:19:41.839 "trtype": "TCP", 00:19:41.839 "adrfam": "IPv4", 00:19:41.839 "traddr": "10.0.0.1", 00:19:41.839 "trsvcid": "55094" 00:19:41.839 }, 00:19:41.839 "auth": { 00:19:41.839 "state": "completed", 00:19:41.839 "digest": "sha256", 00:19:41.839 "dhgroup": "ffdhe8192" 00:19:41.839 } 00:19:41.839 } 00:19:41.839 ]' 
00:19:41.839 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.839 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.098 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.098 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.098 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.098 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.098 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.098 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.356 10:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:19:43.730 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.730 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:43.730 10:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.730 10:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.730 10:40:31 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.730 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.730 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.730 10:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.988 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:43.988 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.988 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.988 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.989 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.989 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.989 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:19:43.989 10:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.989 10:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.989 10:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.989 10:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.989 10:40:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.923 00:19:44.923 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.923 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.923 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.182 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.182 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.182 10:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.182 10:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.182 10:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.182 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.182 { 00:19:45.182 "cntlid": 47, 00:19:45.182 "qid": 0, 00:19:45.182 "state": "enabled", 00:19:45.182 "listen_address": { 00:19:45.182 "trtype": "TCP", 00:19:45.182 "adrfam": "IPv4", 00:19:45.182 "traddr": "10.0.0.2", 00:19:45.182 "trsvcid": "4420" 00:19:45.182 }, 00:19:45.182 "peer_address": { 00:19:45.182 "trtype": "TCP", 00:19:45.182 "adrfam": "IPv4", 00:19:45.182 "traddr": "10.0.0.1", 00:19:45.182 "trsvcid": "55122" 00:19:45.182 }, 00:19:45.182 "auth": { 00:19:45.182 "state": "completed", 00:19:45.182 "digest": "sha256", 00:19:45.182 "dhgroup": "ffdhe8192" 00:19:45.182 } 00:19:45.182 } 00:19:45.182 ]' 00:19:45.182 10:40:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.182 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.182 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.182 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.182 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.440 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.440 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.440 10:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.697 10:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.070 
10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.070 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.329 00:19:47.587 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.587 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.587 10:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.845 { 00:19:47.845 "cntlid": 49, 00:19:47.845 "qid": 0, 00:19:47.845 "state": "enabled", 00:19:47.845 "listen_address": { 00:19:47.845 "trtype": "TCP", 00:19:47.845 "adrfam": "IPv4", 00:19:47.845 "traddr": "10.0.0.2", 00:19:47.845 "trsvcid": "4420" 00:19:47.845 }, 00:19:47.845 "peer_address": { 00:19:47.845 "trtype": "TCP", 00:19:47.845 "adrfam": "IPv4", 00:19:47.845 "traddr": "10.0.0.1", 00:19:47.845 "trsvcid": "37418" 00:19:47.845 }, 00:19:47.845 "auth": 
{ 00:19:47.845 "state": "completed", 00:19:47.845 "digest": "sha384", 00:19:47.845 "dhgroup": "null" 00:19:47.845 } 00:19:47.845 } 00:19:47.845 ]' 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.845 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.103 10:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:19:49.477 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.477 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:49.477 10:40:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.477 10:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.477 10:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.477 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.477 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:49.477 10:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:49.735 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:49.735 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.735 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:49.735 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:49.735 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:49.735 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.735 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.735 10:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.735 10:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.735 10:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.735 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.735 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.993 00:19:49.993 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.993 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.993 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.251 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.251 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.251 10:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.251 10:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.251 10:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.251 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.251 { 00:19:50.251 "cntlid": 51, 00:19:50.251 "qid": 0, 00:19:50.251 "state": "enabled", 00:19:50.251 "listen_address": { 00:19:50.251 "trtype": "TCP", 00:19:50.251 "adrfam": "IPv4", 00:19:50.251 "traddr": "10.0.0.2", 00:19:50.251 "trsvcid": "4420" 00:19:50.251 }, 00:19:50.251 "peer_address": { 00:19:50.251 "trtype": "TCP", 00:19:50.251 "adrfam": "IPv4", 00:19:50.251 "traddr": "10.0.0.1", 00:19:50.251 "trsvcid": "37438" 00:19:50.251 }, 
00:19:50.251 "auth": { 00:19:50.251 "state": "completed", 00:19:50.251 "digest": "sha384", 00:19:50.251 "dhgroup": "null" 00:19:50.251 } 00:19:50.251 } 00:19:50.251 ]' 00:19:50.251 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.251 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.251 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.509 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:50.509 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.509 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.509 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.509 10:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.767 10:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.140 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.706 00:19:52.706 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.706 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.706 10:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.964 { 00:19:52.964 "cntlid": 53, 00:19:52.964 "qid": 0, 00:19:52.964 "state": "enabled", 00:19:52.964 "listen_address": { 00:19:52.964 "trtype": "TCP", 00:19:52.964 "adrfam": "IPv4", 00:19:52.964 "traddr": "10.0.0.2", 00:19:52.964 "trsvcid": "4420" 00:19:52.964 }, 00:19:52.964 "peer_address": { 00:19:52.964 "trtype": "TCP", 00:19:52.964 "adrfam": "IPv4", 00:19:52.964 "traddr": "10.0.0.1", 00:19:52.964 "trsvcid": "37466" 00:19:52.964 }, 
00:19:52.964 "auth": { 00:19:52.964 "state": "completed", 00:19:52.964 "digest": "sha384", 00:19:52.964 "dhgroup": "null" 00:19:52.964 } 00:19:52.964 } 00:19:52.964 ]' 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.964 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.222 10:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:19:54.619 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.619 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:54.619 10:40:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.619 10:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.619 10:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.619 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.619 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:54.619 10:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:54.877 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:54.877 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.877 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.877 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:54.877 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.877 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.877 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:19:54.877 10:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.877 10:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.877 10:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.877 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.877 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.135 00:19:55.136 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.136 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.136 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.394 { 00:19:55.394 "cntlid": 55, 00:19:55.394 "qid": 0, 00:19:55.394 "state": "enabled", 00:19:55.394 "listen_address": { 00:19:55.394 "trtype": "TCP", 00:19:55.394 "adrfam": "IPv4", 00:19:55.394 "traddr": "10.0.0.2", 00:19:55.394 "trsvcid": "4420" 00:19:55.394 }, 00:19:55.394 "peer_address": { 00:19:55.394 "trtype": "TCP", 00:19:55.394 "adrfam": "IPv4", 00:19:55.394 "traddr": "10.0.0.1", 00:19:55.394 "trsvcid": "37486" 00:19:55.394 }, 00:19:55.394 "auth": { 00:19:55.394 "state": "completed", 00:19:55.394 
"digest": "sha384", 00:19:55.394 "dhgroup": "null" 00:19:55.394 } 00:19:55.394 } 00:19:55.394 ]' 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.394 10:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.652 10:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:19:57.024 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.024 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:57.024 10:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.024 10:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.024 
10:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.024 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.024 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.024 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:57.024 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:57.282 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:57.282 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.282 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:57.282 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:57.282 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.282 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.282 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.282 10:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.282 10:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.282 10:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.282 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.283 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.540 00:19:57.540 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.540 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.540 10:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.799 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.799 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.799 10:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.799 10:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.799 10:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.799 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.799 { 00:19:57.799 "cntlid": 57, 00:19:57.799 "qid": 0, 00:19:57.799 "state": "enabled", 00:19:57.799 "listen_address": { 00:19:57.799 "trtype": "TCP", 00:19:57.799 "adrfam": "IPv4", 00:19:57.799 "traddr": "10.0.0.2", 00:19:57.799 "trsvcid": "4420" 00:19:57.799 }, 00:19:57.799 "peer_address": { 00:19:57.799 "trtype": "TCP", 00:19:57.799 "adrfam": "IPv4", 00:19:57.799 "traddr": "10.0.0.1", 00:19:57.799 "trsvcid": "55102" 00:19:57.799 }, 00:19:57.799 "auth": 
{ 00:19:57.799 "state": "completed", 00:19:57.799 "digest": "sha384", 00:19:57.799 "dhgroup": "ffdhe2048" 00:19:57.799 } 00:19:57.799 } 00:19:57.799 ]' 00:19:57.799 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.057 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.057 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.057 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.057 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.057 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.057 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.057 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.314 10:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:19:59.688 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.688 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:59.688 10:40:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.688 10:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.688 10:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.688 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.688 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:59.688 10:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:59.946 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:59.946 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.946 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.946 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:59.946 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:59.946 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.946 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.946 10:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.946 10:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.946 10:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.946 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.946 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.204 00:20:00.204 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.204 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.204 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.462 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.462 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.462 10:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.462 10:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.462 10:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.462 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.462 { 00:20:00.462 "cntlid": 59, 00:20:00.462 "qid": 0, 00:20:00.462 "state": "enabled", 00:20:00.462 "listen_address": { 00:20:00.462 "trtype": "TCP", 00:20:00.462 "adrfam": "IPv4", 00:20:00.462 "traddr": "10.0.0.2", 00:20:00.462 "trsvcid": "4420" 00:20:00.462 }, 00:20:00.462 "peer_address": { 00:20:00.462 "trtype": "TCP", 00:20:00.462 "adrfam": "IPv4", 00:20:00.462 "traddr": 
"10.0.0.1", 00:20:00.462 "trsvcid": "55124" 00:20:00.462 }, 00:20:00.462 "auth": { 00:20:00.462 "state": "completed", 00:20:00.462 "digest": "sha384", 00:20:00.462 "dhgroup": "ffdhe2048" 00:20:00.462 } 00:20:00.462 } 00:20:00.462 ]' 00:20:00.462 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.462 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.462 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.719 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.719 10:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.719 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.719 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.719 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.977 10:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.350 10:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.940 00:20:02.940 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.940 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.940 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.940 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.940 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.202 10:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.202 10:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.202 10:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.202 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.202 { 00:20:03.202 "cntlid": 61, 00:20:03.202 "qid": 0, 00:20:03.202 "state": "enabled", 00:20:03.202 "listen_address": { 00:20:03.202 "trtype": "TCP", 00:20:03.202 "adrfam": "IPv4", 00:20:03.202 "traddr": "10.0.0.2", 00:20:03.202 "trsvcid": "4420" 00:20:03.202 }, 00:20:03.202 "peer_address": { 
00:20:03.202 "trtype": "TCP", 00:20:03.202 "adrfam": "IPv4", 00:20:03.202 "traddr": "10.0.0.1", 00:20:03.202 "trsvcid": "55136" 00:20:03.202 }, 00:20:03.202 "auth": { 00:20:03.203 "state": "completed", 00:20:03.203 "digest": "sha384", 00:20:03.203 "dhgroup": "ffdhe2048" 00:20:03.203 } 00:20:03.203 } 00:20:03.203 ]' 00:20:03.203 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.203 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.203 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.203 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.203 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.203 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.203 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.203 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.461 10:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:20:04.835 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.835 10:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:04.835 10:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.835 10:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.835 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.400 00:20:05.400 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.400 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.400 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.658 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.658 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.658 10:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.658 10:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.658 10:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.658 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.658 { 00:20:05.658 "cntlid": 63, 00:20:05.658 "qid": 0, 00:20:05.658 "state": "enabled", 00:20:05.658 "listen_address": { 00:20:05.658 "trtype": "TCP", 00:20:05.658 "adrfam": "IPv4", 00:20:05.658 "traddr": "10.0.0.2", 00:20:05.658 "trsvcid": "4420" 00:20:05.658 }, 00:20:05.658 "peer_address": { 00:20:05.658 "trtype": "TCP", 00:20:05.658 "adrfam": 
"IPv4", 00:20:05.658 "traddr": "10.0.0.1", 00:20:05.658 "trsvcid": "55160" 00:20:05.658 }, 00:20:05.658 "auth": { 00:20:05.658 "state": "completed", 00:20:05.658 "digest": "sha384", 00:20:05.658 "dhgroup": "ffdhe2048" 00:20:05.658 } 00:20:05.658 } 00:20:05.658 ]' 00:20:05.658 10:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.658 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.658 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.658 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.658 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.658 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.658 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.658 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.228 10:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:20:07.165 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.165 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:07.165 10:40:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.165 10:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.165 10:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.165 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.165 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.165 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.165 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.423 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:07.423 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.423 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.423 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:07.423 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:07.423 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.423 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.423 10:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.423 10:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.423 10:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:20:07.423 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.423 10:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.991 00:20:07.991 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.991 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.991 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.249 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.249 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.249 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.249 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.249 10:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.249 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.249 { 00:20:08.249 "cntlid": 65, 00:20:08.249 "qid": 0, 00:20:08.249 "state": "enabled", 00:20:08.249 "listen_address": { 00:20:08.249 "trtype": "TCP", 00:20:08.249 "adrfam": "IPv4", 00:20:08.249 "traddr": "10.0.0.2", 00:20:08.249 "trsvcid": "4420" 00:20:08.249 }, 00:20:08.249 
"peer_address": { 00:20:08.249 "trtype": "TCP", 00:20:08.249 "adrfam": "IPv4", 00:20:08.249 "traddr": "10.0.0.1", 00:20:08.249 "trsvcid": "37532" 00:20:08.249 }, 00:20:08.249 "auth": { 00:20:08.249 "state": "completed", 00:20:08.249 "digest": "sha384", 00:20:08.249 "dhgroup": "ffdhe3072" 00:20:08.249 } 00:20:08.249 } 00:20:08.249 ]' 00:20:08.249 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.249 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.250 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.250 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:08.250 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.250 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.250 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.250 10:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.820 10:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:20:09.759 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.759 10:40:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:09.759 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.759 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.759 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.759 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.759 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.759 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:10.164 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:10.164 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.164 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:10.164 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:10.164 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.164 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.164 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.164 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.164 10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.164 
10:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.164 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.164 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.423 00:20:10.681 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.681 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.681 10:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.939 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.939 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.939 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.939 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.939 10:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.939 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.939 { 00:20:10.939 "cntlid": 67, 00:20:10.939 "qid": 0, 00:20:10.939 "state": "enabled", 00:20:10.939 "listen_address": { 00:20:10.939 "trtype": "TCP", 00:20:10.939 "adrfam": "IPv4", 00:20:10.939 "traddr": "10.0.0.2", 
00:20:10.940 "trsvcid": "4420" 00:20:10.940 }, 00:20:10.940 "peer_address": { 00:20:10.940 "trtype": "TCP", 00:20:10.940 "adrfam": "IPv4", 00:20:10.940 "traddr": "10.0.0.1", 00:20:10.940 "trsvcid": "37572" 00:20:10.940 }, 00:20:10.940 "auth": { 00:20:10.940 "state": "completed", 00:20:10.940 "digest": "sha384", 00:20:10.940 "dhgroup": "ffdhe3072" 00:20:10.940 } 00:20:10.940 } 00:20:10.940 ]' 00:20:10.940 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.940 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.940 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.940 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.940 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.940 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.940 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.940 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.199 10:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:20:12.576 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.576 10:41:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:12.576 10:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.576 10:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.576 10:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.576 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.576 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:12.576 10:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:12.835 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:12.835 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.835 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.835 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:12.835 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.835 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.835 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.835 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.835 10:41:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.835 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.835 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.835 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.093 00:20:13.093 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.093 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.093 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.351 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.351 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.351 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.351 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.609 10:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.609 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.609 { 00:20:13.609 "cntlid": 69, 00:20:13.609 "qid": 0, 00:20:13.609 "state": "enabled", 00:20:13.609 "listen_address": { 00:20:13.609 "trtype": "TCP", 
00:20:13.609 "adrfam": "IPv4", 00:20:13.609 "traddr": "10.0.0.2", 00:20:13.609 "trsvcid": "4420" 00:20:13.609 }, 00:20:13.609 "peer_address": { 00:20:13.609 "trtype": "TCP", 00:20:13.609 "adrfam": "IPv4", 00:20:13.609 "traddr": "10.0.0.1", 00:20:13.609 "trsvcid": "37596" 00:20:13.609 }, 00:20:13.609 "auth": { 00:20:13.609 "state": "completed", 00:20:13.609 "digest": "sha384", 00:20:13.609 "dhgroup": "ffdhe3072" 00:20:13.609 } 00:20:13.609 } 00:20:13.609 ]' 00:20:13.609 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.609 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.609 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.609 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:13.609 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.609 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.609 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.609 10:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.867 10:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.243 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.243 10:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.807 00:20:15.807 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.807 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.807 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.065 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.065 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.065 10:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.065 10:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.065 10:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.065 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.065 { 00:20:16.065 "cntlid": 71, 00:20:16.065 "qid": 0, 00:20:16.065 "state": "enabled", 00:20:16.065 "listen_address": { 00:20:16.065 "trtype": "TCP", 00:20:16.066 "adrfam": "IPv4", 00:20:16.066 "traddr": 
"10.0.0.2", 00:20:16.066 "trsvcid": "4420" 00:20:16.066 }, 00:20:16.066 "peer_address": { 00:20:16.066 "trtype": "TCP", 00:20:16.066 "adrfam": "IPv4", 00:20:16.066 "traddr": "10.0.0.1", 00:20:16.066 "trsvcid": "37618" 00:20:16.066 }, 00:20:16.066 "auth": { 00:20:16.066 "state": "completed", 00:20:16.066 "digest": "sha384", 00:20:16.066 "dhgroup": "ffdhe3072" 00:20:16.066 } 00:20:16.066 } 00:20:16.066 ]' 00:20:16.066 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.066 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.066 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.066 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.066 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.066 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.066 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.066 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.324 10:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:20:17.697 10:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.697 10:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:17.697 10:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.697 10:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.697 10:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.697 10:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.697 10:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.697 10:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:17.697 10:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:17.955 10:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:17.955 10:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.955 10:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:17.955 10:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:17.955 10:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:17.955 10:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.955 10:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.955 10:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.955 10:41:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.955 10:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.955 10:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.955 10:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.521 00:20:18.521 10:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.521 10:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.521 10:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.779 10:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.779 10:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.779 10:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.779 10:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.779 10:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.779 10:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.780 { 00:20:18.780 "cntlid": 73, 00:20:18.780 "qid": 0, 00:20:18.780 "state": "enabled", 00:20:18.780 "listen_address": { 00:20:18.780 
"trtype": "TCP", 00:20:18.780 "adrfam": "IPv4", 00:20:18.780 "traddr": "10.0.0.2", 00:20:18.780 "trsvcid": "4420" 00:20:18.780 }, 00:20:18.780 "peer_address": { 00:20:18.780 "trtype": "TCP", 00:20:18.780 "adrfam": "IPv4", 00:20:18.780 "traddr": "10.0.0.1", 00:20:18.780 "trsvcid": "50990" 00:20:18.780 }, 00:20:18.780 "auth": { 00:20:18.780 "state": "completed", 00:20:18.780 "digest": "sha384", 00:20:18.780 "dhgroup": "ffdhe4096" 00:20:18.780 } 00:20:18.780 } 00:20:18.780 ]' 00:20:18.780 10:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.780 10:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.780 10:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.780 10:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.780 10:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.780 10:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.780 10:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.780 10:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.038 10:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:20:20.412 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:20.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.412 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:20.412 10:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.412 10:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.412 10:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.412 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.412 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.413 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.670 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:20.670 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.670 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.670 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:20.670 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:20.670 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.670 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.670 10:41:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.670 10:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.670 10:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.670 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.670 10:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.928 00:20:20.928 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.928 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.928 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.187 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.187 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.187 10:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.187 10:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.187 10:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.187 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.187 { 00:20:21.187 "cntlid": 75, 00:20:21.187 "qid": 0, 
00:20:21.187 "state": "enabled", 00:20:21.187 "listen_address": { 00:20:21.187 "trtype": "TCP", 00:20:21.187 "adrfam": "IPv4", 00:20:21.187 "traddr": "10.0.0.2", 00:20:21.187 "trsvcid": "4420" 00:20:21.187 }, 00:20:21.187 "peer_address": { 00:20:21.187 "trtype": "TCP", 00:20:21.187 "adrfam": "IPv4", 00:20:21.187 "traddr": "10.0.0.1", 00:20:21.187 "trsvcid": "51014" 00:20:21.187 }, 00:20:21.187 "auth": { 00:20:21.187 "state": "completed", 00:20:21.187 "digest": "sha384", 00:20:21.187 "dhgroup": "ffdhe4096" 00:20:21.187 } 00:20:21.187 } 00:20:21.187 ]' 00:20:21.187 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.446 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.446 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.446 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:21.446 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.446 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.446 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.446 10:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.705 10:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:20:23.080 10:41:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.080 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:23.080 10:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.080 10:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.080 10:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.080 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.080 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.081 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.081 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:23.081 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.081 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.081 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:23.081 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:23.081 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.081 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.081 
10:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.081 10:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.081 10:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.081 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.081 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.647 00:20:23.647 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.647 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.647 10:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.906 10:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.906 10:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.906 10:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.906 10:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.906 10:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.906 10:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.906 { 00:20:23.906 
"cntlid": 77, 00:20:23.906 "qid": 0, 00:20:23.906 "state": "enabled", 00:20:23.906 "listen_address": { 00:20:23.906 "trtype": "TCP", 00:20:23.906 "adrfam": "IPv4", 00:20:23.906 "traddr": "10.0.0.2", 00:20:23.906 "trsvcid": "4420" 00:20:23.906 }, 00:20:23.906 "peer_address": { 00:20:23.906 "trtype": "TCP", 00:20:23.906 "adrfam": "IPv4", 00:20:23.906 "traddr": "10.0.0.1", 00:20:23.906 "trsvcid": "51040" 00:20:23.906 }, 00:20:23.906 "auth": { 00:20:23.906 "state": "completed", 00:20:23.906 "digest": "sha384", 00:20:23.906 "dhgroup": "ffdhe4096" 00:20:23.906 } 00:20:23.906 } 00:20:23.906 ]' 00:20:23.906 10:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.906 10:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.906 10:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.906 10:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.906 10:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.165 10:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.165 10:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.165 10:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.424 10:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:20:25.814 10:41:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.814 10:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:25.814 10:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.814 10:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.814 10:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.814 10:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.814 10:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.814 10:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.814 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:25.814 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.814 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.814 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:25.814 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:25.814 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.814 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 
00:20:25.814 10:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.814 10:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.814 10:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.814 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.814 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.380 00:20:26.380 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.380 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.380 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.638 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.638 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.638 10:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.638 10:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.638 10:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.638 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.638 { 00:20:26.638 "cntlid": 79, 00:20:26.638 "qid": 0, 
00:20:26.638 "state": "enabled", 00:20:26.638 "listen_address": { 00:20:26.638 "trtype": "TCP", 00:20:26.638 "adrfam": "IPv4", 00:20:26.638 "traddr": "10.0.0.2", 00:20:26.638 "trsvcid": "4420" 00:20:26.638 }, 00:20:26.638 "peer_address": { 00:20:26.638 "trtype": "TCP", 00:20:26.638 "adrfam": "IPv4", 00:20:26.638 "traddr": "10.0.0.1", 00:20:26.638 "trsvcid": "51074" 00:20:26.638 }, 00:20:26.638 "auth": { 00:20:26.638 "state": "completed", 00:20:26.638 "digest": "sha384", 00:20:26.638 "dhgroup": "ffdhe4096" 00:20:26.638 } 00:20:26.639 } 00:20:26.639 ]' 00:20:26.639 10:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.639 10:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.639 10:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.639 10:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:26.639 10:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.639 10:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.639 10:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.639 10:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.900 10:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:20:28.274 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:20:28.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.274 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:28.274 10:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.274 10:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.274 10:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.274 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.274 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.274 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.274 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.532 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:28.532 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.532 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.532 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:28.532 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:28.532 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.532 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:28.532 10:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.532 10:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.532 10:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.532 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.532 10:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.098 00:20:29.098 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.098 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.098 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.356 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.356 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.356 10:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.356 10:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.356 10:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.356 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:29.356 { 00:20:29.356 "cntlid": 81, 00:20:29.356 "qid": 0, 00:20:29.356 "state": "enabled", 00:20:29.356 "listen_address": { 00:20:29.356 "trtype": "TCP", 00:20:29.356 "adrfam": "IPv4", 00:20:29.356 "traddr": "10.0.0.2", 00:20:29.356 "trsvcid": "4420" 00:20:29.356 }, 00:20:29.356 "peer_address": { 00:20:29.356 "trtype": "TCP", 00:20:29.356 "adrfam": "IPv4", 00:20:29.356 "traddr": "10.0.0.1", 00:20:29.356 "trsvcid": "47462" 00:20:29.356 }, 00:20:29.356 "auth": { 00:20:29.356 "state": "completed", 00:20:29.356 "digest": "sha384", 00:20:29.356 "dhgroup": "ffdhe6144" 00:20:29.356 } 00:20:29.356 } 00:20:29.356 ]' 00:20:29.356 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.670 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.670 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.670 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.670 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.670 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.670 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.670 10:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.952 10:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:20:30.886 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.143 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:31.143 10:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.143 10:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.143 10:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.143 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.143 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.143 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.401 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:31.401 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.401 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.401 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:31.401 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:31.401 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.401 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.401 10:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.401 10:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.401 10:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.401 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.401 10:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.967 00:20:31.967 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.967 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.967 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.224 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.224 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.224 10:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.224 10:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.224 10:41:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.224 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.224 { 00:20:32.224 "cntlid": 83, 00:20:32.224 "qid": 0, 00:20:32.224 "state": "enabled", 00:20:32.224 "listen_address": { 00:20:32.224 "trtype": "TCP", 00:20:32.224 "adrfam": "IPv4", 00:20:32.224 "traddr": "10.0.0.2", 00:20:32.224 "trsvcid": "4420" 00:20:32.224 }, 00:20:32.224 "peer_address": { 00:20:32.224 "trtype": "TCP", 00:20:32.225 "adrfam": "IPv4", 00:20:32.225 "traddr": "10.0.0.1", 00:20:32.225 "trsvcid": "47486" 00:20:32.225 }, 00:20:32.225 "auth": { 00:20:32.225 "state": "completed", 00:20:32.225 "digest": "sha384", 00:20:32.225 "dhgroup": "ffdhe6144" 00:20:32.225 } 00:20:32.225 } 00:20:32.225 ]' 00:20:32.225 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.225 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.225 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.482 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.482 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.482 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.482 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.482 10:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.740 10:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret 
DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:20:34.114 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.114 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:34.114 10:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.114 10:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.114 10:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.114 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.114 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:34.114 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:34.114 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:34.114 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.114 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.115 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:34.115 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:34.115 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.115 10:41:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.115 10:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.115 10:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.115 10:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.115 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.115 10:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.681 00:20:34.949 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.949 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.949 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.215 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.215 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.215 10:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.215 10:41:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.215 10:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.215 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.215 { 00:20:35.215 "cntlid": 85, 00:20:35.215 "qid": 0, 00:20:35.215 "state": "enabled", 00:20:35.215 "listen_address": { 00:20:35.215 "trtype": "TCP", 00:20:35.215 "adrfam": "IPv4", 00:20:35.215 "traddr": "10.0.0.2", 00:20:35.215 "trsvcid": "4420" 00:20:35.215 }, 00:20:35.215 "peer_address": { 00:20:35.215 "trtype": "TCP", 00:20:35.215 "adrfam": "IPv4", 00:20:35.215 "traddr": "10.0.0.1", 00:20:35.215 "trsvcid": "47504" 00:20:35.215 }, 00:20:35.215 "auth": { 00:20:35.215 "state": "completed", 00:20:35.215 "digest": "sha384", 00:20:35.215 "dhgroup": "ffdhe6144" 00:20:35.215 } 00:20:35.215 } 00:20:35.216 ]' 00:20:35.216 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.216 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.216 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.216 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:35.216 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.216 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.216 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.216 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.474 10:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:20:36.849 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.849 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:36.849 10:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.849 10:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.849 10:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.849 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.849 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.849 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.107 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:37.107 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.107 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:37.107 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:37.107 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:37.107 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.107 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:20:37.107 10:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.107 10:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.107 10:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.107 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.107 10:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.673 00:20:37.673 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.673 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.673 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.931 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.931 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.931 10:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.931 10:41:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.931 10:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.931 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.931 { 00:20:37.931 "cntlid": 87, 00:20:37.931 "qid": 0, 00:20:37.931 "state": "enabled", 00:20:37.931 "listen_address": { 00:20:37.931 "trtype": "TCP", 00:20:37.931 "adrfam": "IPv4", 00:20:37.931 "traddr": "10.0.0.2", 00:20:37.931 "trsvcid": "4420" 00:20:37.931 }, 00:20:37.931 "peer_address": { 00:20:37.931 "trtype": "TCP", 00:20:37.931 "adrfam": "IPv4", 00:20:37.931 "traddr": "10.0.0.1", 00:20:37.931 "trsvcid": "37970" 00:20:37.932 }, 00:20:37.932 "auth": { 00:20:37.932 "state": "completed", 00:20:37.932 "digest": "sha384", 00:20:37.932 "dhgroup": "ffdhe6144" 00:20:37.932 } 00:20:37.932 } 00:20:37.932 ]' 00:20:37.932 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.932 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.932 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.932 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.932 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.932 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.932 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.932 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.190 10:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:20:39.565 10:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.565 10:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:39.565 10:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.565 10:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.565 10:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.565 10:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.565 10:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.565 10:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.565 10:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.824 10:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:39.824 10:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.824 10:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:39.824 10:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:39.824 10:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:39.824 10:41:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:39.824 10:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:39.824 10:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:39.824 10:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.824 10:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:39.824 10:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:39.824 10:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:40.758
00:20:40.758 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:40.758 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:40.758 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:41.016 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:41.274 {
00:20:41.274 "cntlid": 89,
00:20:41.274 "qid": 0,
00:20:41.274 "state": "enabled",
00:20:41.274 "listen_address": {
00:20:41.274 "trtype": "TCP",
00:20:41.274 "adrfam": "IPv4",
00:20:41.274 "traddr": "10.0.0.2",
00:20:41.274 "trsvcid": "4420"
00:20:41.274 },
00:20:41.274 "peer_address": {
00:20:41.274 "trtype": "TCP",
00:20:41.274 "adrfam": "IPv4",
00:20:41.274 "traddr": "10.0.0.1",
00:20:41.274 "trsvcid": "37998"
00:20:41.274 },
00:20:41.274 "auth": {
00:20:41.274 "state": "completed",
00:20:41.274 "digest": "sha384",
00:20:41.274 "dhgroup": "ffdhe8192"
00:20:41.274 }
00:20:41.274 }
00:20:41.274 ]'
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:41.274 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:41.533 10:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=:
00:20:42.908 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:42.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:42.908 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:20:42.908 10:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:42.908 10:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:42.908 10:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:42.908 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:42.908 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:42.908 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:43.166 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1
00:20:43.166 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:43.166 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:43.166 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:43.166 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:43.166 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:43.166 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:43.166 10:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:43.166 10:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.166 10:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:43.166 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:43.166 10:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:44.101
00:20:44.101 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:44.101 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:44.101 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:44.359 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:44.359 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:44.359 10:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:44.359 10:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:44.359 10:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:44.359 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:44.359 {
00:20:44.359 "cntlid": 91,
00:20:44.359 "qid": 0,
00:20:44.359 "state": "enabled",
00:20:44.359 "listen_address": {
00:20:44.359 "trtype": "TCP",
00:20:44.359 "adrfam": "IPv4",
00:20:44.359 "traddr": "10.0.0.2",
00:20:44.359 "trsvcid": "4420"
00:20:44.359 },
00:20:44.359 "peer_address": {
00:20:44.359 "trtype": "TCP",
00:20:44.359 "adrfam": "IPv4",
00:20:44.359 "traddr": "10.0.0.1",
00:20:44.359 "trsvcid": "38024"
00:20:44.359 },
00:20:44.359 "auth": {
00:20:44.359 "state": "completed",
00:20:44.359 "digest": "sha384",
00:20:44.359 "dhgroup": "ffdhe8192"
00:20:44.359 }
00:20:44.359 }
00:20:44.359 ]'
00:20:44.359 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:44.359 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:44.359 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:44.616 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:44.616 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:44.616 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:44.616 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:44.616 10:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:44.874 10:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==:
00:20:46.248 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:46.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:46.248 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:20:46.248 10:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:46.249 10:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:47.620
00:20:47.620 10:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:47.620 10:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:47.620 10:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:47.620 10:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:47.620 10:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:47.620 10:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:47.620 10:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.620 10:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:47.620 10:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:47.620 {
00:20:47.620 "cntlid": 93,
00:20:47.620 "qid": 0,
00:20:47.620 "state": "enabled",
00:20:47.620 "listen_address": {
00:20:47.620 "trtype": "TCP",
00:20:47.620 "adrfam": "IPv4",
00:20:47.620 "traddr": "10.0.0.2",
00:20:47.620 "trsvcid": "4420"
00:20:47.620 },
00:20:47.620 "peer_address": {
00:20:47.620 "trtype": "TCP",
00:20:47.620 "adrfam": "IPv4",
00:20:47.620 "traddr": "10.0.0.1",
00:20:47.620 "trsvcid": "39042"
00:20:47.620 },
00:20:47.620 "auth": {
00:20:47.620 "state": "completed",
00:20:47.620 "digest": "sha384",
00:20:47.620 "dhgroup": "ffdhe8192"
00:20:47.620 }
00:20:47.620 }
00:20:47.620 ]'
00:20:47.620 10:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:47.620 10:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:47.620 10:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:47.620 10:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:47.877 10:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:47.877 10:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:47.877 10:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:47.877 10:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:48.134 10:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB:
00:20:49.530 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:49.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:49.530 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:20:49.530 10:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:49.530 10:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:49.531 10:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:50.462
00:20:50.719 10:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:50.719 10:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:50.720 10:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:50.977 {
00:20:50.977 "cntlid": 95,
00:20:50.977 "qid": 0,
00:20:50.977 "state": "enabled",
00:20:50.977 "listen_address": {
00:20:50.977 "trtype": "TCP",
00:20:50.977 "adrfam": "IPv4",
00:20:50.977 "traddr": "10.0.0.2",
00:20:50.977 "trsvcid": "4420"
00:20:50.977 },
00:20:50.977 "peer_address": {
00:20:50.977 "trtype": "TCP",
00:20:50.977 "adrfam": "IPv4",
00:20:50.977 "traddr": "10.0.0.1",
00:20:50.977 "trsvcid": "39064"
00:20:50.977 },
00:20:50.977 "auth": {
00:20:50.977 "state": "completed",
00:20:50.977 "digest": "sha384",
00:20:50.977 "dhgroup": "ffdhe8192"
00:20:50.977 }
00:20:50.977 }
00:20:50.977 ]'
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:50.977 10:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:51.236 10:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=:
00:20:52.610 10:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:52.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:52.610 10:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:20:52.610 10:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:52.610 10:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.610 10:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:52.610 10:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:20:52.610 10:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:52.610 10:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:52.610 10:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:52.610 10:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:52.869 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0
00:20:52.869 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:52.869 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:52.869 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:52.869 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:52.869 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:52.869 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:52.869 10:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:52.869 10:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.869 10:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:52.869 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:52.869 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:53.127
00:20:53.127 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:53.127 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:53.127 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:53.385 {
00:20:53.385 "cntlid": 97,
00:20:53.385 "qid": 0,
00:20:53.385 "state": "enabled",
00:20:53.385 "listen_address": {
00:20:53.385 "trtype": "TCP",
00:20:53.385 "adrfam": "IPv4",
00:20:53.385 "traddr": "10.0.0.2",
00:20:53.385 "trsvcid": "4420"
00:20:53.385 },
00:20:53.385 "peer_address": {
00:20:53.385 "trtype": "TCP",
00:20:53.385 "adrfam": "IPv4",
00:20:53.385 "traddr": "10.0.0.1",
00:20:53.385 "trsvcid": "39092"
00:20:53.385 },
00:20:53.385 "auth": {
00:20:53.385 "state": "completed",
00:20:53.385 "digest": "sha512",
00:20:53.385 "dhgroup": "null"
00:20:53.385 }
00:20:53.385 }
00:20:53.385 ]'
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:53.385 10:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:53.643 10:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=:
00:20:55.015 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:55.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:55.015 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:20:55.015 10:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:55.015 10:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.015 10:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:55.015 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:55.015 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:55.015 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:55.273 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1
00:20:55.273 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:55.273 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:55.273 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:55.273 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:55.273 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:55.273 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:55.273 10:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:55.273 10:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.273 10:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:55.273 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:55.273 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:55.531
00:20:55.531 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:55.531 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:55.531 10:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:55.789 10:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:55.789 10:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:55.789 10:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:55.789 10:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.789 10:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:55.789 10:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:55.789 {
00:20:55.789 "cntlid": 99,
00:20:55.789 "qid": 0,
00:20:55.789 "state": "enabled",
00:20:55.789 "listen_address": {
00:20:55.789 "trtype": "TCP",
00:20:55.789 "adrfam": "IPv4",
00:20:55.789 "traddr": "10.0.0.2",
00:20:55.789 "trsvcid": "4420"
00:20:55.789 },
00:20:55.789 "peer_address": {
00:20:55.789 "trtype": "TCP",
00:20:55.789 "adrfam": "IPv4",
00:20:55.789 "traddr": "10.0.0.1",
00:20:55.789 "trsvcid": "39120"
00:20:55.789 },
00:20:55.789 "auth": {
00:20:55.789 "state": "completed",
00:20:55.789 "digest": "sha512",
00:20:55.789 "dhgroup": "null"
00:20:55.789 }
00:20:55.789 }
00:20:55.789 ]'
00:20:56.047 10:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:56.047 10:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:56.047 10:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:56.047 10:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:56.047 10:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:56.047 10:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:56.047 10:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:56.047 10:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:56.304 10:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==:
00:20:57.746 10:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:57.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:57.746 10:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:20:57.746 10:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:57.746 10:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.746 10:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:57.746 10:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:57.746 10:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:57.747 10:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:57.747 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2
00:20:57.747 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:57.747 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:57.747 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:57.747 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:57.747 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:57.747 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:57.747 10:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:57.747 10:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.747 10:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:57.747 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:57.747 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:58.313
00:20:58.313 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:58.313 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:58.313 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:58.572 {
00:20:58.572 "cntlid": 101,
00:20:58.572 "qid": 0,
00:20:58.572 "state": "enabled",
00:20:58.572 "listen_address": {
00:20:58.572 "trtype": "TCP",
00:20:58.572 "adrfam": "IPv4",
00:20:58.572 "traddr": "10.0.0.2",
00:20:58.572 "trsvcid": "4420"
00:20:58.572 },
00:20:58.572 "peer_address": {
00:20:58.572 "trtype": "TCP",
00:20:58.572 "adrfam": "IPv4",
00:20:58.572 "traddr": "10.0.0.1",
00:20:58.572 "trsvcid": "58390"
00:20:58.572 },
00:20:58.572 "auth": {
00:20:58.572 "state": "completed",
00:20:58.572 "digest": "sha512",
00:20:58.572 "dhgroup": "null"
00:20:58.572 }
00:20:58.572 }
00:20:58.572 ]'
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:58.572 10:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:58.830 10:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB:
00:21:00.204 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:00.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:00.204 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:21:00.204 10:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:00.204 10:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:00.204 10:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:00.204 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:00.204 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:00.204 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:00.462 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3
00:21:00.462 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup
key ckey qpairs 00:21:00.462 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:00.462 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:00.462 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:00.462 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.462 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:00.462 10:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.462 10:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.462 10:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.462 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.462 10:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.720 00:21:00.720 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.720 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.720 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.978 10:41:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.978 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.978 10:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.978 10:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.978 10:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.978 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.978 { 00:21:00.978 "cntlid": 103, 00:21:00.978 "qid": 0, 00:21:00.978 "state": "enabled", 00:21:00.978 "listen_address": { 00:21:00.978 "trtype": "TCP", 00:21:00.978 "adrfam": "IPv4", 00:21:00.978 "traddr": "10.0.0.2", 00:21:00.978 "trsvcid": "4420" 00:21:00.978 }, 00:21:00.978 "peer_address": { 00:21:00.978 "trtype": "TCP", 00:21:00.978 "adrfam": "IPv4", 00:21:00.978 "traddr": "10.0.0.1", 00:21:00.978 "trsvcid": "58420" 00:21:00.978 }, 00:21:00.978 "auth": { 00:21:00.978 "state": "completed", 00:21:00.978 "digest": "sha512", 00:21:00.978 "dhgroup": "null" 00:21:00.978 } 00:21:00.978 } 00:21:00.978 ]' 00:21:00.978 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.978 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.978 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.978 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:00.978 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.236 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.236 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.236 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.494 10:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:21:02.428 10:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.428 10:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:02.428 10:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.428 10:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.428 10:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.428 10:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.428 10:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.428 10:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.428 10:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.994 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:02.994 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:21:02.994 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.994 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:02.994 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.994 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.994 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.994 10:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.994 10:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.994 10:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.994 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.994 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.253 00:21:03.253 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.253 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.253 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:03.511 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.511 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.511 10:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.511 10:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.511 10:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.511 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.511 { 00:21:03.511 "cntlid": 105, 00:21:03.511 "qid": 0, 00:21:03.511 "state": "enabled", 00:21:03.511 "listen_address": { 00:21:03.511 "trtype": "TCP", 00:21:03.511 "adrfam": "IPv4", 00:21:03.511 "traddr": "10.0.0.2", 00:21:03.511 "trsvcid": "4420" 00:21:03.511 }, 00:21:03.511 "peer_address": { 00:21:03.511 "trtype": "TCP", 00:21:03.511 "adrfam": "IPv4", 00:21:03.511 "traddr": "10.0.0.1", 00:21:03.511 "trsvcid": "58452" 00:21:03.511 }, 00:21:03.511 "auth": { 00:21:03.511 "state": "completed", 00:21:03.511 "digest": "sha512", 00:21:03.511 "dhgroup": "ffdhe2048" 00:21:03.511 } 00:21:03.511 } 00:21:03.511 ]' 00:21:03.511 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.511 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.511 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.511 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.511 10:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.511 10:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.511 10:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:21:03.511 10:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.079 10:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:21:05.011 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.011 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:05.011 10:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.011 10:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.011 10:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.011 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.011 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.011 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.576 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:05.576 10:41:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.576 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:05.576 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:05.576 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:05.576 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.576 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.576 10:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.576 10:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.576 10:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.576 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.576 10:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.833 00:21:05.833 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.833 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.833 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.091 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.091 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.091 10:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.091 10:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.091 10:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.091 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.091 { 00:21:06.091 "cntlid": 107, 00:21:06.091 "qid": 0, 00:21:06.091 "state": "enabled", 00:21:06.091 "listen_address": { 00:21:06.091 "trtype": "TCP", 00:21:06.091 "adrfam": "IPv4", 00:21:06.091 "traddr": "10.0.0.2", 00:21:06.091 "trsvcid": "4420" 00:21:06.091 }, 00:21:06.091 "peer_address": { 00:21:06.091 "trtype": "TCP", 00:21:06.091 "adrfam": "IPv4", 00:21:06.091 "traddr": "10.0.0.1", 00:21:06.091 "trsvcid": "58476" 00:21:06.091 }, 00:21:06.091 "auth": { 00:21:06.091 "state": "completed", 00:21:06.091 "digest": "sha512", 00:21:06.091 "dhgroup": "ffdhe2048" 00:21:06.091 } 00:21:06.091 } 00:21:06.091 ]' 00:21:06.091 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.091 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.091 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.091 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.091 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.348 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.348 10:41:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.348 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.606 10:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:21:07.537 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.795 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:07.795 10:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.795 10:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.795 10:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.795 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.795 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.795 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.053 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:21:08.053 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.053 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.053 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:08.053 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:08.053 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.053 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.053 10:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.053 10:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.053 10:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.053 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.053 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.311 00:21:08.311 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.311 10:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.311 10:41:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.568 10:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.568 10:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.568 10:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.568 10:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.568 10:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.568 10:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.568 { 00:21:08.568 "cntlid": 109, 00:21:08.568 "qid": 0, 00:21:08.568 "state": "enabled", 00:21:08.568 "listen_address": { 00:21:08.568 "trtype": "TCP", 00:21:08.568 "adrfam": "IPv4", 00:21:08.568 "traddr": "10.0.0.2", 00:21:08.568 "trsvcid": "4420" 00:21:08.568 }, 00:21:08.568 "peer_address": { 00:21:08.568 "trtype": "TCP", 00:21:08.568 "adrfam": "IPv4", 00:21:08.568 "traddr": "10.0.0.1", 00:21:08.568 "trsvcid": "33538" 00:21:08.568 }, 00:21:08.568 "auth": { 00:21:08.568 "state": "completed", 00:21:08.568 "digest": "sha512", 00:21:08.568 "dhgroup": "ffdhe2048" 00:21:08.568 } 00:21:08.568 } 00:21:08.568 ]' 00:21:08.568 10:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.568 10:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.568 10:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.826 10:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.826 10:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.826 10:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:21:08.826 10:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.826 10:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.082 10:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.456 10:41:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.456 10:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.021 00:21:11.021 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.021 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.021 10:41:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.279 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.279 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.279 10:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.279 10:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.279 10:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.280 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.280 { 00:21:11.280 "cntlid": 111, 00:21:11.280 "qid": 0, 00:21:11.280 "state": "enabled", 00:21:11.280 "listen_address": { 00:21:11.280 "trtype": "TCP", 00:21:11.280 "adrfam": "IPv4", 00:21:11.280 "traddr": "10.0.0.2", 00:21:11.280 "trsvcid": "4420" 00:21:11.280 }, 00:21:11.280 "peer_address": { 00:21:11.280 "trtype": "TCP", 00:21:11.280 "adrfam": "IPv4", 00:21:11.280 "traddr": "10.0.0.1", 00:21:11.280 "trsvcid": "33560" 00:21:11.280 }, 00:21:11.280 "auth": { 00:21:11.280 "state": "completed", 00:21:11.280 "digest": "sha512", 00:21:11.280 "dhgroup": "ffdhe2048" 00:21:11.280 } 00:21:11.280 } 00:21:11.280 ]' 00:21:11.280 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.280 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.280 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.280 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.280 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.280 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:21:11.280 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:11.280 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:11.538 10:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=:
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:12.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:12.912 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:13.479
00:21:13.479 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:13.479 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:13.479 10:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:13.737 {
00:21:13.737 "cntlid": 113,
00:21:13.737 "qid": 0,
00:21:13.737 "state": "enabled",
00:21:13.737 "listen_address": {
00:21:13.737 "trtype": "TCP",
00:21:13.737 "adrfam": "IPv4",
00:21:13.737 "traddr": "10.0.0.2",
00:21:13.737 "trsvcid": "4420"
00:21:13.737 },
00:21:13.737 "peer_address": {
00:21:13.737 "trtype": "TCP",
00:21:13.737 "adrfam": "IPv4",
00:21:13.737 "traddr": "10.0.0.1",
00:21:13.737 "trsvcid": "33584"
00:21:13.737 },
00:21:13.737 "auth": {
00:21:13.737 "state": "completed",
00:21:13.737 "digest": "sha512",
00:21:13.737 "dhgroup": "ffdhe3072"
00:21:13.737 }
00:21:13.737 }
00:21:13.737 ]'
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:13.737 10:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:14.304 10:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=:
00:21:15.238 10:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:15.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:15.238 10:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:21:15.238 10:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:15.238 10:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.238 10:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:15.238 10:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:15.238 10:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:15.238 10:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:15.497 10:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:21:15.497 10:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:15.497 10:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:15.497 10:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:21:15.497 10:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:21:15.497 10:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:15.497 10:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:15.497 10:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:15.497 10:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.754 10:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:15.754 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:15.754 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:16.012
00:21:16.012 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:16.012 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:16.012 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:16.270 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:16.270 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:16.270 10:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:16.270 10:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:16.270 10:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:16.270 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:16.270 {
00:21:16.270 "cntlid": 115,
00:21:16.270 "qid": 0,
00:21:16.270 "state": "enabled",
00:21:16.270 "listen_address": {
00:21:16.270 "trtype": "TCP",
00:21:16.270 "adrfam": "IPv4",
00:21:16.270 "traddr": "10.0.0.2",
00:21:16.270 "trsvcid": "4420"
00:21:16.270 },
00:21:16.270 "peer_address": {
00:21:16.270 "trtype": "TCP",
00:21:16.270 "adrfam": "IPv4",
00:21:16.270 "traddr": "10.0.0.1",
00:21:16.270 "trsvcid": "33618"
00:21:16.270 },
00:21:16.270 "auth": {
00:21:16.270 "state": "completed",
00:21:16.270 "digest": "sha512",
00:21:16.270 "dhgroup": "ffdhe3072"
00:21:16.270 }
00:21:16.270 }
00:21:16.270 ]'
00:21:16.270 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:16.270 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:16.270 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:16.528 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:16.528 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:16.528 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:16.528 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:16.528 10:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:16.786 10:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==:
00:21:18.159 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:18.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:18.160 10:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:18.725
00:21:18.725 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:18.725 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:18.725 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:18.983 {
00:21:18.983 "cntlid": 117,
00:21:18.983 "qid": 0,
00:21:18.983 "state": "enabled",
00:21:18.983 "listen_address": {
00:21:18.983 "trtype": "TCP",
00:21:18.983 "adrfam": "IPv4",
00:21:18.983 "traddr": "10.0.0.2",
00:21:18.983 "trsvcid": "4420"
00:21:18.983 },
00:21:18.983 "peer_address": {
00:21:18.983 "trtype": "TCP",
00:21:18.983 "adrfam": "IPv4",
00:21:18.983 "traddr": "10.0.0.1",
00:21:18.983 "trsvcid": "49120"
00:21:18.983 },
00:21:18.983 "auth": {
00:21:18.983 "state": "completed",
00:21:18.983 "digest": "sha512",
00:21:18.983 "dhgroup": "ffdhe3072"
00:21:18.983 }
00:21:18.983 }
00:21:18.983 ]'
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:18.983 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:19.241 10:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB:
00:21:20.615 10:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:20.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:20.615 10:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:21:20.615 10:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:20.615 10:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.615 10:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:20.615 10:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:20.615 10:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:20.615 10:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:20.873 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:21:20.873 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:20.873 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:20.873 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:21:20.873 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:21:20.873 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:20.873 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3
00:21:20.873 10:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:20.873 10:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.873 10:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:20.873 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:20.873 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:21.131
00:21:21.131 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:21.131 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:21.131 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:21.389 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:21.389 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:21.389 10:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:21.389 10:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:21.648 10:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:21.648 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:21.648 {
00:21:21.648 "cntlid": 119,
00:21:21.648 "qid": 0,
00:21:21.648 "state": "enabled",
00:21:21.648 "listen_address": {
00:21:21.648 "trtype": "TCP",
00:21:21.648 "adrfam": "IPv4",
00:21:21.648 "traddr": "10.0.0.2",
00:21:21.648 "trsvcid": "4420"
00:21:21.648 },
00:21:21.648 "peer_address": {
00:21:21.648 "trtype": "TCP",
00:21:21.648 "adrfam": "IPv4",
00:21:21.648 "traddr": "10.0.0.1",
00:21:21.648 "trsvcid": "49150"
00:21:21.648 },
00:21:21.648 "auth": {
00:21:21.648 "state": "completed",
00:21:21.648 "digest": "sha512",
00:21:21.648 "dhgroup": "ffdhe3072"
00:21:21.648 }
00:21:21.648 }
00:21:21.648 ]'
00:21:21.648 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:21.648 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:21.648 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:21.648 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:21.648 10:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:21.648 10:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:21.648 10:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:21.648 10:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:21.906 10:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=:
00:21:23.329 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:23.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:23.330 10:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:23.921
00:21:23.921 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:23.921 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:23.921 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:24.179 {
00:21:24.179 "cntlid": 121,
00:21:24.179 "qid": 0,
00:21:24.179 "state": "enabled",
00:21:24.179 "listen_address": {
00:21:24.179 "trtype": "TCP",
00:21:24.179 "adrfam": "IPv4",
00:21:24.179 "traddr": "10.0.0.2",
00:21:24.179 "trsvcid": "4420"
00:21:24.179 },
00:21:24.179 "peer_address": {
00:21:24.179 "trtype": "TCP",
00:21:24.179 "adrfam": "IPv4",
00:21:24.179 "traddr": "10.0.0.1",
00:21:24.179 "trsvcid": "49172"
00:21:24.179 },
00:21:24.179 "auth": {
00:21:24.179 "state": "completed",
00:21:24.179 "digest": "sha512",
00:21:24.179 "dhgroup": "ffdhe4096"
00:21:24.179 }
00:21:24.179 }
00:21:24.179 ]'
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:24.179 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:24.748 10:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=:
00:21:25.684 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:25.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:25.684 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:21:25.684 10:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:25.684 10:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:25.684 10:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:25.684 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:25.684 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:25.685 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:26.255 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1
00:21:26.255 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:26.255 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:26.255 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:21:26.255 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:21:26.255 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:26.255 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:26.255 10:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:26.255 10:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:26.255 10:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:26.255 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:26.255 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:26.513
00:21:26.513 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:26.513 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:26.513 10:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:26.771 10:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:26.771 10:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:26.771 10:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:26.771 10:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:26.771 10:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:26.771 10:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:26.771 {
00:21:26.771 "cntlid": 123,
00:21:26.771 "qid": 0,
00:21:26.771 "state": "enabled",
00:21:26.771 "listen_address": {
00:21:26.771 "trtype": "TCP",
00:21:26.771 "adrfam": "IPv4",
00:21:26.771 "traddr": "10.0.0.2",
00:21:26.771 "trsvcid": "4420"
00:21:26.771 },
00:21:26.771 "peer_address": {
00:21:26.771 "trtype": "TCP",
00:21:26.771 "adrfam": "IPv4",
00:21:26.771 "traddr": "10.0.0.1",
00:21:26.771 "trsvcid": "44562"
00:21:26.771 },
00:21:26.771 "auth": {
00:21:26.771 "state": "completed",
00:21:26.771 "digest": "sha512",
00:21:26.771 "dhgroup": "ffdhe4096"
00:21:26.771 }
00:21:26.771 }
00:21:26.771 ]'
00:21:26.771 10:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:26.771 10:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:26.771 10:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:27.028 10:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:27.028 10:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:27.028 10:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:27.028 10:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:27.028 10:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:27.286 10:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==:
00:21:28.663 10:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:28.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:28.663 10:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:21:28.663 10:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:28.663 10:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:28.663 10:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:28.663 10:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:28.663 10:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:28.663 10:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:28.663 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2
00:21:28.663 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:28.663 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:28.663 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:21:28.663 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:21:28.663 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:28.663 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:28.663 10:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:28.663 10:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:28.663 10:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:28.663 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:28.663 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.231 00:21:29.231 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.231 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.231 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.489 { 00:21:29.489 "cntlid": 125, 00:21:29.489 "qid": 0, 00:21:29.489 "state": "enabled", 00:21:29.489 "listen_address": { 00:21:29.489 "trtype": "TCP", 00:21:29.489 "adrfam": "IPv4", 00:21:29.489 "traddr": "10.0.0.2", 00:21:29.489 "trsvcid": "4420" 00:21:29.489 }, 00:21:29.489 "peer_address": { 00:21:29.489 "trtype": "TCP", 00:21:29.489 "adrfam": "IPv4", 00:21:29.489 "traddr": "10.0.0.1", 00:21:29.489 "trsvcid": "44582" 00:21:29.489 }, 00:21:29.489 "auth": { 00:21:29.489 "state": "completed", 00:21:29.489 "digest": "sha512", 00:21:29.489 "dhgroup": "ffdhe4096" 00:21:29.489 } 00:21:29.489 } 00:21:29.489 ]' 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.489 10:42:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.489 10:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.054 10:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:21:31.008 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.008 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:31.008 10:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.008 10:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.008 10:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.008 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.008 10:42:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.008 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.266 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:31.266 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.266 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.266 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:31.266 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:31.266 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.266 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:31.266 10:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.266 10:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.266 10:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.266 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.266 10:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.834 00:21:31.834 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.834 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.834 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.091 { 00:21:32.091 "cntlid": 127, 00:21:32.091 "qid": 0, 00:21:32.091 "state": "enabled", 00:21:32.091 "listen_address": { 00:21:32.091 "trtype": "TCP", 00:21:32.091 "adrfam": "IPv4", 00:21:32.091 "traddr": "10.0.0.2", 00:21:32.091 "trsvcid": "4420" 00:21:32.091 }, 00:21:32.091 "peer_address": { 00:21:32.091 "trtype": "TCP", 00:21:32.091 "adrfam": "IPv4", 00:21:32.091 "traddr": "10.0.0.1", 00:21:32.091 "trsvcid": "44602" 00:21:32.091 }, 00:21:32.091 "auth": { 00:21:32.091 "state": "completed", 00:21:32.091 "digest": "sha512", 00:21:32.091 "dhgroup": "ffdhe4096" 00:21:32.091 } 00:21:32.091 } 00:21:32.091 ]' 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.091 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.351 10:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:21:33.730 10:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.730 10:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:33.731 10:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.731 10:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.731 10:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.731 10:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.731 10:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.731 10:42:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.731 10:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.989 10:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:33.989 10:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.989 10:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.989 10:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:33.989 10:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:33.989 10:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.989 10:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.989 10:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.989 10:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.989 10:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.989 10:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.989 10:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.555 00:21:34.555 10:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.555 10:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.555 10:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.813 10:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.813 10:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.813 10:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.813 10:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.813 10:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.813 10:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.813 { 00:21:34.813 "cntlid": 129, 00:21:34.813 "qid": 0, 00:21:34.813 "state": "enabled", 00:21:34.813 "listen_address": { 00:21:34.813 "trtype": "TCP", 00:21:34.813 "adrfam": "IPv4", 00:21:34.813 "traddr": "10.0.0.2", 00:21:34.813 "trsvcid": "4420" 00:21:34.813 }, 00:21:34.813 "peer_address": { 00:21:34.813 "trtype": "TCP", 00:21:34.813 "adrfam": "IPv4", 00:21:34.813 "traddr": "10.0.0.1", 00:21:34.813 "trsvcid": "44640" 00:21:34.813 }, 00:21:34.813 "auth": { 00:21:34.813 "state": "completed", 00:21:34.813 "digest": "sha512", 00:21:34.813 "dhgroup": "ffdhe6144" 00:21:34.813 } 00:21:34.813 } 00:21:34.813 ]' 00:21:34.813 10:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.813 10:42:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.813 10:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.070 10:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:35.070 10:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.070 10:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.070 10:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.070 10:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.327 10:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:21:36.700 10:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.700 10:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:36.700 10:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.700 10:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.700 10:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.700 10:42:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.700 10:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.700 10:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.700 10:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:36.700 10:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.700 10:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.700 10:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:36.700 10:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:36.700 10:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.700 10:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.700 10:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.700 10:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.700 10:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.700 10:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.700 10:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.633 00:21:37.633 10:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.633 10:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.633 10:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.633 10:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.633 10:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.633 10:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.633 10:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.633 10:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.633 10:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.633 { 00:21:37.633 "cntlid": 131, 00:21:37.633 "qid": 0, 00:21:37.633 "state": "enabled", 00:21:37.633 "listen_address": { 00:21:37.633 "trtype": "TCP", 00:21:37.633 "adrfam": "IPv4", 00:21:37.633 "traddr": "10.0.0.2", 00:21:37.633 "trsvcid": "4420" 00:21:37.633 }, 00:21:37.633 "peer_address": { 00:21:37.633 "trtype": "TCP", 00:21:37.633 "adrfam": "IPv4", 00:21:37.633 "traddr": "10.0.0.1", 00:21:37.633 "trsvcid": "42022" 00:21:37.633 }, 00:21:37.633 "auth": { 00:21:37.633 "state": "completed", 00:21:37.633 "digest": "sha512", 00:21:37.633 "dhgroup": "ffdhe6144" 00:21:37.633 } 00:21:37.633 } 00:21:37.633 ]' 00:21:37.633 10:42:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.891 10:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.891 10:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.891 10:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:37.891 10:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.891 10:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.891 10:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.891 10:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.149 10:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:21:39.522 10:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.522 10:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:39.522 10:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.522 10:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.522 10:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:21:39.522 10:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.522 10:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:39.522 10:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:39.780 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:39.780 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.780 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.780 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:39.780 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:39.780 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.780 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.780 10:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.780 10:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.780 10:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.780 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.780 10:42:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.345 00:21:40.345 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.346 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.346 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.604 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.604 10:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.604 10:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.604 10:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.604 10:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.604 10:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.604 { 00:21:40.604 "cntlid": 133, 00:21:40.604 "qid": 0, 00:21:40.604 "state": "enabled", 00:21:40.604 "listen_address": { 00:21:40.604 "trtype": "TCP", 00:21:40.604 "adrfam": "IPv4", 00:21:40.604 "traddr": "10.0.0.2", 00:21:40.604 "trsvcid": "4420" 00:21:40.604 }, 00:21:40.604 "peer_address": { 00:21:40.604 "trtype": "TCP", 00:21:40.604 "adrfam": "IPv4", 00:21:40.604 "traddr": "10.0.0.1", 00:21:40.604 "trsvcid": "42040" 00:21:40.604 }, 00:21:40.604 "auth": { 00:21:40.604 "state": "completed", 00:21:40.604 "digest": "sha512", 00:21:40.604 "dhgroup": "ffdhe6144" 00:21:40.604 } 00:21:40.604 } 00:21:40.604 ]' 
00:21:40.604 10:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.604 10:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.604 10:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.604 10:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:40.604 10:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.862 10:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.862 10:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.862 10:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.119 10:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.492 10:42:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.492 10:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.492 10:42:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.424 00:21:43.424 10:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.424 10:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.424 10:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.424 10:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.424 10:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.424 10:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.424 10:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.424 10:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.424 10:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.424 { 00:21:43.424 "cntlid": 135, 00:21:43.424 "qid": 0, 00:21:43.424 "state": "enabled", 00:21:43.424 "listen_address": { 00:21:43.424 "trtype": "TCP", 00:21:43.424 "adrfam": "IPv4", 00:21:43.424 "traddr": "10.0.0.2", 00:21:43.424 "trsvcid": "4420" 00:21:43.424 }, 00:21:43.424 "peer_address": { 00:21:43.424 "trtype": "TCP", 00:21:43.424 "adrfam": "IPv4", 00:21:43.424 "traddr": "10.0.0.1", 00:21:43.424 "trsvcid": "42076" 00:21:43.424 }, 00:21:43.425 "auth": { 00:21:43.425 "state": "completed", 00:21:43.425 "digest": "sha512", 00:21:43.425 "dhgroup": "ffdhe6144" 00:21:43.425 } 00:21:43.425 } 00:21:43.425 ]' 00:21:43.425 10:42:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.682 10:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.682 10:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.682 10:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.682 10:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.682 10:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.682 10:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.682 10:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.940 10:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.312 
10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.312 10:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.684 00:21:46.684 10:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.684 10:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.684 10:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.684 10:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.684 10:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.684 10:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.684 10:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.685 10:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.685 10:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.685 { 00:21:46.685 "cntlid": 137, 00:21:46.685 "qid": 0, 00:21:46.685 "state": "enabled", 00:21:46.685 "listen_address": { 00:21:46.685 "trtype": "TCP", 00:21:46.685 "adrfam": "IPv4", 00:21:46.685 "traddr": "10.0.0.2", 00:21:46.685 "trsvcid": "4420" 00:21:46.685 }, 00:21:46.685 "peer_address": { 00:21:46.685 "trtype": "TCP", 00:21:46.685 "adrfam": "IPv4", 00:21:46.685 "traddr": "10.0.0.1", 00:21:46.685 "trsvcid": "42106" 00:21:46.685 }, 00:21:46.685 "auth": { 00:21:46.685 "state": "completed", 00:21:46.685 "digest": "sha512", 00:21:46.685 "dhgroup": 
"ffdhe8192" 00:21:46.685 } 00:21:46.685 } 00:21:46.685 ]' 00:21:46.685 10:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.685 10:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.685 10:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.685 10:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.685 10:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.943 10:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.943 10:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.943 10:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.201 10:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:21:48.136 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.394 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:48.394 10:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.394 10:42:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.394 10:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.394 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.394 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.394 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.652 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:48.652 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.652 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.652 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:48.652 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:48.652 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.652 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.652 10:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.652 10:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.652 10:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.652 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.652 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.591 00:21:49.591 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.591 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.591 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.891 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.891 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.891 10:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.891 10:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.891 10:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.891 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.891 { 00:21:49.891 "cntlid": 139, 00:21:49.891 "qid": 0, 00:21:49.891 "state": "enabled", 00:21:49.891 "listen_address": { 00:21:49.891 "trtype": "TCP", 00:21:49.891 "adrfam": "IPv4", 00:21:49.891 "traddr": "10.0.0.2", 00:21:49.891 "trsvcid": "4420" 00:21:49.891 }, 00:21:49.891 "peer_address": { 00:21:49.891 "trtype": "TCP", 00:21:49.891 "adrfam": "IPv4", 00:21:49.891 "traddr": "10.0.0.1", 00:21:49.891 "trsvcid": "39376" 00:21:49.891 }, 00:21:49.891 
"auth": { 00:21:49.891 "state": "completed", 00:21:49.891 "digest": "sha512", 00:21:49.891 "dhgroup": "ffdhe8192" 00:21:49.891 } 00:21:49.891 } 00:21:49.891 ]' 00:21:49.891 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.891 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.891 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.175 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.175 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.175 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.175 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.175 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.434 10:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NDBmOTNmNTc3YzdjNjM0YWRhMWQyMTc2ZDQyNWQzYjBUwUQS: --dhchap-ctrl-secret DHHC-1:02:NGM4OWUwZTIzNTFlZDk4NzI2MzViMzdhY2IzZjU2ZWFlMzg3ZGIwYWEwNzU5YTE4FAsr9Q==: 00:21:51.368 10:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.627 10:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:51.627 10:42:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.627 10:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.627 10:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.627 10:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.627 10:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.627 10:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.898 10:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:51.898 10:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.898 10:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.898 10:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:51.898 10:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:51.898 10:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.898 10:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.898 10:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.898 10:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.898 10:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.898 10:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.898 10:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.831 00:21:52.831 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.831 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.831 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.090 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.090 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.090 10:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.090 10:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.090 10:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.090 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.090 { 00:21:53.090 "cntlid": 141, 00:21:53.090 "qid": 0, 00:21:53.090 "state": "enabled", 00:21:53.090 "listen_address": { 00:21:53.090 "trtype": "TCP", 00:21:53.090 "adrfam": "IPv4", 00:21:53.090 "traddr": "10.0.0.2", 00:21:53.090 "trsvcid": "4420" 00:21:53.090 }, 00:21:53.090 "peer_address": { 00:21:53.090 "trtype": "TCP", 00:21:53.090 "adrfam": "IPv4", 00:21:53.090 "traddr": "10.0.0.1", 00:21:53.090 "trsvcid": 
"39404" 00:21:53.090 }, 00:21:53.090 "auth": { 00:21:53.090 "state": "completed", 00:21:53.090 "digest": "sha512", 00:21:53.090 "dhgroup": "ffdhe8192" 00:21:53.090 } 00:21:53.090 } 00:21:53.090 ]' 00:21:53.090 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.090 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.090 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.348 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.348 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.348 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.348 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.348 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.606 10:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:N2E0YmQ2ODJmZGNiZWUxY2U2MzlmYjE2MjM2OTA2MjcyMDYzNzY0ZWM0NmIyNTk1xMcjuA==: --dhchap-ctrl-secret DHHC-1:01:Y2M4YmI5Njc4NjkyNmViMmZjMGIwMmM1OTE1NDdjZGHSW9gB: 00:21:54.979 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.979 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:54.979 10:42:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.979 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.979 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.979 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.979 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.980 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.913 00:21:56.171 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.171 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.171 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.429 { 00:21:56.429 "cntlid": 143, 00:21:56.429 "qid": 0, 00:21:56.429 "state": "enabled", 00:21:56.429 "listen_address": { 00:21:56.429 "trtype": "TCP", 00:21:56.429 "adrfam": "IPv4", 00:21:56.429 "traddr": "10.0.0.2", 00:21:56.429 "trsvcid": "4420" 00:21:56.429 }, 00:21:56.429 "peer_address": { 00:21:56.429 "trtype": "TCP", 00:21:56.429 "adrfam": "IPv4", 00:21:56.429 "traddr": "10.0.0.1", 00:21:56.429 "trsvcid": "39436" 00:21:56.429 }, 00:21:56.429 "auth": { 
00:21:56.429 "state": "completed", 00:21:56.429 "digest": "sha512", 00:21:56.429 "dhgroup": "ffdhe8192" 00:21:56.429 } 00:21:56.429 } 00:21:56.429 ]' 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.429 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.687 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:21:58.060 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.060 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:58.060 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.060 10:42:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.060 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.060 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:58.060 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:58.060 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:58.060 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.060 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.060 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.318 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:58.318 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.318 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.318 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:58.318 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:58.318 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.318 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.318 10:42:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.318 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.319 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.319 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.319 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.251 00:21:59.251 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.251 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.251 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.509 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.509 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.509 10:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.509 10:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.509 10:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.509 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.509 { 00:21:59.509 "cntlid": 145, 00:21:59.509 "qid": 0, 
00:21:59.509 "state": "enabled", 00:21:59.509 "listen_address": { 00:21:59.509 "trtype": "TCP", 00:21:59.509 "adrfam": "IPv4", 00:21:59.509 "traddr": "10.0.0.2", 00:21:59.509 "trsvcid": "4420" 00:21:59.509 }, 00:21:59.509 "peer_address": { 00:21:59.509 "trtype": "TCP", 00:21:59.509 "adrfam": "IPv4", 00:21:59.509 "traddr": "10.0.0.1", 00:21:59.509 "trsvcid": "46550" 00:21:59.509 }, 00:21:59.509 "auth": { 00:21:59.509 "state": "completed", 00:21:59.509 "digest": "sha512", 00:21:59.509 "dhgroup": "ffdhe8192" 00:21:59.509 } 00:21:59.509 } 00:21:59.509 ]' 00:21:59.509 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.509 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.509 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.766 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:59.766 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.766 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.766 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.766 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.024 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MTI3Yjg1NWRkZGQyMjE0YWZhYjg0OTkwNzE1YThhMmYzZDdjYTU1YjhiOTMyZGE5YQoNIQ==: --dhchap-ctrl-secret DHHC-1:03:ZDUwOTM4ODVmMjE4MzJlODk4MWM4Y2JlMjc1Mzc4MzBkNGJhZGU3YmZmOWFiYTFlOWU4YTIxZjRmNGVjYTRkNljqHJE=: 00:22:01.396 10:42:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:01.396 10:42:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:01.396 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:02.329 request: 00:22:02.329 { 00:22:02.329 "name": "nvme0", 00:22:02.329 "trtype": "tcp", 00:22:02.329 "traddr": "10.0.0.2", 00:22:02.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:22:02.329 "adrfam": "ipv4", 00:22:02.329 "trsvcid": "4420", 00:22:02.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.329 "dhchap_key": "key2", 00:22:02.329 "method": "bdev_nvme_attach_controller", 00:22:02.329 "req_id": 1 00:22:02.329 } 00:22:02.329 Got JSON-RPC error response 00:22:02.329 response: 00:22:02.329 { 00:22:02.329 "code": -5, 00:22:02.329 "message": "Input/output error" 00:22:02.329 } 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.329 10:42:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.329 10:42:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:02.329 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:03.260 request: 00:22:03.260 { 00:22:03.260 "name": "nvme0", 00:22:03.260 "trtype": "tcp", 00:22:03.260 "traddr": "10.0.0.2", 00:22:03.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:22:03.260 "adrfam": "ipv4", 00:22:03.260 "trsvcid": "4420", 00:22:03.260 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:03.260 "dhchap_key": "key1", 00:22:03.260 "dhchap_ctrlr_key": "ckey2", 00:22:03.260 "method": "bdev_nvme_attach_controller", 00:22:03.260 "req_id": 1 00:22:03.260 } 00:22:03.260 Got JSON-RPC error response 00:22:03.260 response: 00:22:03.260 { 00:22:03.260 "code": -5, 00:22:03.260 "message": "Input/output error" 00:22:03.260 } 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:03.260 10:42:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.260 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:03.261 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.261 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:03.261 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.261 10:42:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # type -t hostrpc 00:22:03.261 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.261 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.261 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.193 request: 00:22:04.193 { 00:22:04.193 "name": "nvme0", 00:22:04.193 "trtype": "tcp", 00:22:04.193 "traddr": "10.0.0.2", 00:22:04.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:22:04.193 "adrfam": "ipv4", 00:22:04.193 "trsvcid": "4420", 00:22:04.193 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.193 "dhchap_key": "key1", 00:22:04.193 "dhchap_ctrlr_key": "ckey1", 00:22:04.193 "method": "bdev_nvme_attach_controller", 00:22:04.193 "req_id": 1 00:22:04.193 } 00:22:04.193 Got JSON-RPC error response 00:22:04.193 response: 00:22:04.193 { 00:22:04.193 "code": -5, 00:22:04.193 "message": "Input/output error" 00:22:04.193 } 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3830920 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3830920 ']' 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3830920 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3830920 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3830920' 00:22:04.193 killing process with pid 3830920 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3830920 00:22:04.193 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3830920 00:22:04.451 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:04.451 10:42:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.451 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:04.451 10:42:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.451 10:42:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3851313 00:22:04.451 10:42:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:04.451 10:42:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3851313 00:22:04.451 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3851313 ']' 00:22:04.451 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.451 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:04.451 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.451 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:04.451 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@142 -- # waitforlisten 3851313 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3851313 ']' 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:04.709 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.972 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:04.972 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:04.972 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:04.972 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.972 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.240 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.172 00:22:06.172 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.172 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.172 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.428 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.428 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.428 10:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.428 10:42:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:06.428 10:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.428 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.428 { 00:22:06.428 "cntlid": 1, 00:22:06.428 "qid": 0, 00:22:06.428 "state": "enabled", 00:22:06.428 "listen_address": { 00:22:06.428 "trtype": "TCP", 00:22:06.428 "adrfam": "IPv4", 00:22:06.428 "traddr": "10.0.0.2", 00:22:06.428 "trsvcid": "4420" 00:22:06.428 }, 00:22:06.428 "peer_address": { 00:22:06.428 "trtype": "TCP", 00:22:06.428 "adrfam": "IPv4", 00:22:06.428 "traddr": "10.0.0.1", 00:22:06.428 "trsvcid": "46598" 00:22:06.428 }, 00:22:06.428 "auth": { 00:22:06.428 "state": "completed", 00:22:06.428 "digest": "sha512", 00:22:06.428 "dhgroup": "ffdhe8192" 00:22:06.428 } 00:22:06.428 } 00:22:06.428 ]' 00:22:06.428 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.428 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.428 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.684 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.685 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.685 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.685 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.685 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.941 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MDc0ZmQ1MDNmMTBlZDMyMWE3ZjlkMmJlMDMwM2UxZmEzMjlhYjQyYjU1NDkzODAxZTk3MzQ0ODUzMmZlOTBjZELIwzk=: 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.312 
10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.312 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.877 request: 00:22:08.877 { 00:22:08.877 "name": "nvme0", 00:22:08.877 "trtype": "tcp", 00:22:08.877 "traddr": "10.0.0.2", 00:22:08.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:22:08.877 "adrfam": "ipv4", 00:22:08.877 "trsvcid": "4420", 00:22:08.877 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:08.877 "dhchap_key": "key3", 00:22:08.877 "method": "bdev_nvme_attach_controller", 00:22:08.877 "req_id": 1 00:22:08.877 } 00:22:08.877 Got JSON-RPC error response 00:22:08.877 response: 
00:22:08.877 { 00:22:08.877 "code": -5, 00:22:08.877 "message": "Input/output error" 00:22:08.877 } 00:22:08.877 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:08.877 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.877 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.877 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.877 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:08.877 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:08.877 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:08.877 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:09.135 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.135 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:09.135 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.135 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:09.135 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:09.135 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:09.135 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.135 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.135 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.393 request: 00:22:09.393 { 00:22:09.393 "name": "nvme0", 00:22:09.393 "trtype": "tcp", 00:22:09.393 "traddr": "10.0.0.2", 00:22:09.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:22:09.393 "adrfam": "ipv4", 00:22:09.393 "trsvcid": "4420", 00:22:09.393 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:09.393 "dhchap_key": "key3", 00:22:09.393 "method": "bdev_nvme_attach_controller", 00:22:09.393 "req_id": 1 00:22:09.393 } 00:22:09.393 Got JSON-RPC error response 00:22:09.393 response: 00:22:09.393 { 00:22:09.393 "code": -5, 00:22:09.393 "message": "Input/output error" 00:22:09.393 } 00:22:09.393 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:09.393 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:09.393 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:09.393 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:09.393 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:09.393 10:42:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:09.393 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:09.393 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:09.393 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:09.393 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:09.651 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:09.651 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.651 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:09.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:09.909 request: 00:22:09.909 { 00:22:09.909 "name": "nvme0", 00:22:09.909 "trtype": "tcp", 00:22:09.909 "traddr": "10.0.0.2", 00:22:09.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:22:09.909 "adrfam": "ipv4", 00:22:09.909 "trsvcid": "4420", 00:22:09.909 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:22:09.909 "dhchap_key": "key0", 00:22:09.909 "dhchap_ctrlr_key": "key1", 00:22:09.909 "method": "bdev_nvme_attach_controller", 00:22:09.909 "req_id": 1 00:22:09.909 } 00:22:09.909 Got JSON-RPC error response 00:22:09.909 response: 00:22:09.909 { 00:22:09.909 "code": -5, 00:22:09.909 "message": "Input/output error" 00:22:09.909 } 00:22:09.909 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:09.909 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:09.909 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:09.909 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:09.909 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:09.909 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:10.167 00:22:10.167 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:10.167 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:10.167 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.744 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.744 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:22:10.744 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3830939 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3830939 ']' 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3830939 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3830939 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3830939' 00:22:11.002 killing process with pid 3830939 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3830939 00:22:11.002 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3830939 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@120 -- # set +e 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.260 rmmod nvme_tcp 00:22:11.260 rmmod nvme_fabrics 00:22:11.260 rmmod nvme_keyring 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3851313 ']' 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3851313 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3851313 ']' 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3851313 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3851313 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3851313' 00:22:11.260 killing process with pid 3851313 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3851313 00:22:11.260 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3851313 00:22:11.519 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:11.519 
10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:11.519 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:11.519 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.519 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:11.519 10:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.519 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.519 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.427 10:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:13.427 10:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.f0A /tmp/spdk.key-sha256.RP3 /tmp/spdk.key-sha384.oAS /tmp/spdk.key-sha512.gQK /tmp/spdk.key-sha512.q9f /tmp/spdk.key-sha384.4k9 /tmp/spdk.key-sha256.DjB '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:13.427 00:22:13.427 real 3m37.166s 00:22:13.427 user 8m25.375s 00:22:13.427 sys 0m25.980s 00:22:13.427 10:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:13.427 10:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.427 ************************************ 00:22:13.427 END TEST nvmf_auth_target 00:22:13.427 ************************************ 00:22:13.427 10:43:01 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:13.427 10:43:01 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:13.427 10:43:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 
1 ']' 00:22:13.427 10:43:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:13.427 10:43:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.427 ************************************ 00:22:13.427 START TEST nvmf_bdevio_no_huge 00:22:13.427 ************************************ 00:22:13.427 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:13.687 * Looking for test storage... 00:22:13.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:13.687 10:43:01 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:13.687 10:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:15.594 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:15.594 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:15.594 Found net devices under 0000:08:00.0: cvl_0_0 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:15.594 Found net devices under 0000:08:00.1: cvl_0_1 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:15.594 10:43:03 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.594 
10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.594 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:15.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:22:15.594 00:22:15.594 --- 10.0.0.2 ping statistics --- 00:22:15.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.595 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:22:15.595 00:22:15.595 --- 10.0.0.1 ping statistics --- 00:22:15.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.595 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:15.595 10:43:03 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3853451 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3853451 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3853451 ']' 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:15.595 10:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.595 [2024-07-23 10:43:03.800011] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:15.595 [2024-07-23 10:43:03.800112] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:15.595 [2024-07-23 10:43:03.869383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.595 [2024-07-23 10:43:03.957776] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.595 [2024-07-23 10:43:03.957832] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.595 [2024-07-23 10:43:03.957848] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.595 [2024-07-23 10:43:03.957860] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.595 [2024-07-23 10:43:03.957872] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:15.595 [2024-07-23 10:43:03.957961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:15.595 [2024-07-23 10:43:03.958016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:15.595 [2024-07-23 10:43:03.958065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:15.595 [2024-07-23 10:43:03.958067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.595 [2024-07-23 10:43:04.073916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.595 Malloc0 00:22:15.595 10:43:04 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.595 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.854 [2024-07-23 10:43:04.112289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:15.854 10:43:04 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.854 { 00:22:15.854 "params": { 00:22:15.854 "name": "Nvme$subsystem", 00:22:15.854 "trtype": "$TEST_TRANSPORT", 00:22:15.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.854 "adrfam": "ipv4", 00:22:15.854 "trsvcid": "$NVMF_PORT", 00:22:15.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.854 "hdgst": ${hdgst:-false}, 00:22:15.854 "ddgst": ${ddgst:-false} 00:22:15.854 }, 00:22:15.854 "method": "bdev_nvme_attach_controller" 00:22:15.854 } 00:22:15.854 EOF 00:22:15.854 )") 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:15.854 10:43:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:15.854 "params": { 00:22:15.854 "name": "Nvme1", 00:22:15.854 "trtype": "tcp", 00:22:15.854 "traddr": "10.0.0.2", 00:22:15.854 "adrfam": "ipv4", 00:22:15.854 "trsvcid": "4420", 00:22:15.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.854 "hdgst": false, 00:22:15.854 "ddgst": false 00:22:15.854 }, 00:22:15.854 "method": "bdev_nvme_attach_controller" 00:22:15.854 }' 00:22:15.854 [2024-07-23 10:43:04.159749] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
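For reference, the `gen_nvmf_target_json` expansion captured above (heredoc template in, `printf`-joined JSON out) can be reproduced standalone. This is a trimmed re-creation of the helper from `nvmf/common.sh`, not the exact function; the variable values are the ones visible in this run's log:

```shell
#!/usr/bin/env bash
# Values as seen in the log for this run.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# Trimmed re-creation of gen_nvmf_target_json: expand the per-subsystem
# heredoc template into the JSON handed to bdevio via /dev/fd/62.
gen_nvmf_target_json() {
    local subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

config=$(gen_nvmf_target_json 1)
echo "$config"
```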
00:22:15.854 [2024-07-23 10:43:04.159844] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3853485 ] 00:22:15.854 [2024-07-23 10:43:04.219693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:15.854 [2024-07-23 10:43:04.307520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.854 [2024-07-23 10:43:04.307600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.854 [2024-07-23 10:43:04.307635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.112 I/O targets: 00:22:16.112 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:16.112 00:22:16.112 00:22:16.112 CUnit - A unit testing framework for C - Version 2.1-3 00:22:16.112 http://cunit.sourceforge.net/ 00:22:16.112 00:22:16.112 00:22:16.112 Suite: bdevio tests on: Nvme1n1 00:22:16.112 Test: blockdev write read block ...passed 00:22:16.370 Test: blockdev write zeroes read block ...passed 00:22:16.370 Test: blockdev write zeroes read no split ...passed 00:22:16.370 Test: blockdev write zeroes read split ...passed 00:22:16.370 Test: blockdev write zeroes read split partial ...passed 00:22:16.371 Test: blockdev reset ...[2024-07-23 10:43:04.708857] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:16.371 [2024-07-23 10:43:04.709019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25e8ca0 (9): Bad file descriptor 00:22:16.371 [2024-07-23 10:43:04.806599] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
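The target setup driven by the `rpc_cmd` calls above (create the TCP transport, a 64 MiB / 512-byte-block Malloc0 bdev, subsystem cnode1, its namespace, and the 10.0.0.2:4420 listener) could equivalently be expressed as an SPDK JSON config file. The method names and argument values come straight from the log; the exact parameter schema (as `rpc.py save_config` would emit it, including mapping `-u 8192` to `io_unit_size`) is an assumption:

```json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 131072, "block_size": 512 } }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP", "io_unit_size": 8192 } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "allow_any_host": true,
                      "serial_number": "SPDK00000000000001" } },
        { "method": "nvmf_subsystem_add_ns",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "namespace": { "bdev_name": "Malloc0" } } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" } } }
      ]
    }
  ]
}
```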
00:22:16.371 passed 00:22:16.371 Test: blockdev write read 8 blocks ...passed 00:22:16.371 Test: blockdev write read size > 128k ...passed 00:22:16.371 Test: blockdev write read invalid size ...passed 00:22:16.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:16.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:16.371 Test: blockdev write read max offset ...passed 00:22:16.628 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:16.628 Test: blockdev writev readv 8 blocks ...passed 00:22:16.628 Test: blockdev writev readv 30 x 1block ...passed 00:22:16.628 Test: blockdev writev readv block ...passed 00:22:16.628 Test: blockdev writev readv size > 128k ...passed 00:22:16.628 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:16.628 Test: blockdev comparev and writev ...[2024-07-23 10:43:05.020122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.628 [2024-07-23 10:43:05.020164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.628 [2024-07-23 10:43:05.020192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.628 [2024-07-23 10:43:05.020210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.628 [2024-07-23 10:43:05.020576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.628 [2024-07-23 10:43:05.020602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:16.628 [2024-07-23 10:43:05.020626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.628 [2024-07-23 10:43:05.020644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:16.628 [2024-07-23 10:43:05.020972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.628 [2024-07-23 10:43:05.020996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:16.628 [2024-07-23 10:43:05.021020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.628 [2024-07-23 10:43:05.021044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:16.628 [2024-07-23 10:43:05.021382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.628 [2024-07-23 10:43:05.021407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:16.628 [2024-07-23 10:43:05.021430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.628 [2024-07-23 10:43:05.021446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:16.628 passed 00:22:16.628 Test: blockdev nvme passthru rw ...passed 00:22:16.628 Test: blockdev nvme passthru vendor specific ...[2024-07-23 10:43:05.104766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.628 [2024-07-23 10:43:05.104796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:16.628 [2024-07-23 10:43:05.104950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.628 [2024-07-23 10:43:05.104974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:16.628 [2024-07-23 10:43:05.105130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.628 [2024-07-23 10:43:05.105154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:16.628 [2024-07-23 10:43:05.105322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.628 [2024-07-23 10:43:05.105345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:16.628 passed 00:22:16.629 Test: blockdev nvme admin passthru ...passed 00:22:16.923 Test: blockdev copy ...passed 00:22:16.923 00:22:16.923 Run Summary: Type Total Ran Passed Failed Inactive 00:22:16.923 suites 1 1 n/a 0 0 00:22:16.923 tests 23 23 23 0 0 00:22:16.923 asserts 152 152 152 0 n/a 00:22:16.923 00:22:16.923 Elapsed time = 1.221 seconds 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:17.207 rmmod nvme_tcp 00:22:17.207 rmmod nvme_fabrics 00:22:17.207 rmmod nvme_keyring 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3853451 ']' 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3853451 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3853451 ']' 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3853451 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:17.207 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:17.208 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3853451 00:22:17.208 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:17.208 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:17.208 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 3853451' 00:22:17.208 killing process with pid 3853451 00:22:17.208 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3853451 00:22:17.208 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3853451 00:22:17.467 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:17.467 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:17.467 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:17.467 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:17.467 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:17.467 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.467 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.467 10:43:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.000 10:43:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:20.000 00:22:20.000 real 0m6.066s 00:22:20.000 user 0m10.402s 00:22:20.000 sys 0m2.291s 00:22:20.000 10:43:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:20.000 10:43:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:20.000 ************************************ 00:22:20.000 END TEST nvmf_bdevio_no_huge 00:22:20.000 ************************************ 00:22:20.000 10:43:08 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:20.000 10:43:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:20.000 10:43:08 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:22:20.000 10:43:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:20.000 ************************************ 00:22:20.000 START TEST nvmf_tls 00:22:20.000 ************************************ 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:20.000 * Looking for test storage... 00:22:20.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.000 10:43:08 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:20.001 
10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:20.001 10:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.377 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.377 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:21.377 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:21.377 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:21.378 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:21.378 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:21.378 10:43:09 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:21.378 Found net devices under 0000:08:00.0: cvl_0_0 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:21.378 Found net devices under 0000:08:00.1: cvl_0_1 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:21.378 10:43:09 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:21.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:22:21.378 00:22:21.378 --- 10.0.0.2 ping statistics --- 00:22:21.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.378 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:22:21.378 00:22:21.378 --- 10.0.0.1 ping statistics --- 00:22:21.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.378 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # 
modprobe nvme-tcp 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:21.378 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.379 10:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:21.379 10:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.379 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3855103 00:22:21.379 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:21.379 10:43:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3855103 00:22:21.379 10:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3855103 ']' 00:22:21.379 10:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.379 10:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:21.379 10:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.379 10:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:21.379 10:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.637 [2024-07-23 10:43:09.923682] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:21.637 [2024-07-23 10:43:09.923778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.637 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.637 [2024-07-23 10:43:09.992006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.637 [2024-07-23 10:43:10.080258] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.637 [2024-07-23 10:43:10.080325] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.637 [2024-07-23 10:43:10.080342] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.637 [2024-07-23 10:43:10.080364] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.637 [2024-07-23 10:43:10.080376] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:21.637 [2024-07-23 10:43:10.080408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.895 10:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:21.895 10:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:21.895 10:43:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:21.895 10:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.895 10:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.895 10:43:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.895 10:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:21.895 10:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:22.153 true 00:22:22.153 10:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.153 10:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:22.411 10:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:22.411 10:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:22.411 10:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:22.670 10:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.670 10:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:22.928 10:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:22.928 10:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:22.928 10:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:23.185 10:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:23.185 10:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.443 10:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:23.443 10:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:23.443 10:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.443 10:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:23.701 10:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:23.701 10:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:23.701 10:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:23.958 10:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.958 10:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:23.958 10:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:23.958 10:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:23.958 10:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:24.216 10:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:24.216 10:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.474 10:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:22:24.474 10:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:24.474 10:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:24.474 10:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:24.474 10:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.474 10:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:24.474 10:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:24.474 10:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:24.474 10:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:24.732 10:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:24.732 10:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:24.732 10:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:24.732 10:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.732 10:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:24.732 10:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:24.732 10:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:24.732 10:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:24.732 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:24.732 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:24.732 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.9fWfHQVBsq 00:22:24.732 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:24.732 
10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Pt46tBpyFY 00:22:24.732 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:24.732 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:24.732 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.9fWfHQVBsq 00:22:24.732 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Pt46tBpyFY 00:22:24.732 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:24.989 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:25.246 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.9fWfHQVBsq 00:22:25.246 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9fWfHQVBsq 00:22:25.246 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:25.504 [2024-07-23 10:43:13.809467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.504 10:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:25.762 10:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:26.020 [2024-07-23 10:43:14.282860] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:26.020 [2024-07-23 10:43:14.283067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:26.020 10:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:26.277 malloc0 00:22:26.277 10:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.535 10:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9fWfHQVBsq 00:22:26.535 [2024-07-23 10:43:15.009736] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:26.535 10:43:15 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9fWfHQVBsq 00:22:26.792 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.760 Initializing NVMe Controllers 00:22:36.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:36.760 Initialization complete. Launching workers. 
00:22:36.760 ======================================================== 00:22:36.760 Latency(us) 00:22:36.760 Device Information : IOPS MiB/s Average min max 00:22:36.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7577.89 29.60 8448.44 1198.94 12378.23 00:22:36.760 ======================================================== 00:22:36.760 Total : 7577.89 29.60 8448.44 1198.94 12378.23 00:22:36.760 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9fWfHQVBsq 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9fWfHQVBsq' 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3856530 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3856530 /var/tmp/bdevperf.sock 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3856530 ']' 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:36.760 10:43:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.760 [2024-07-23 10:43:25.183072] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:36.760 [2024-07-23 10:43:25.183167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856530 ] 00:22:36.760 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.760 [2024-07-23 10:43:25.244356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.018 [2024-07-23 10:43:25.334725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.018 10:43:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:37.018 10:43:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:37.018 10:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9fWfHQVBsq 00:22:37.277 [2024-07-23 10:43:25.708276] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.277 [2024-07-23 10:43:25.708406] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:37.535 TLSTESTn1 00:22:37.535 10:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:37.535 Running I/O for 10 seconds... 00:22:47.504 00:22:47.504 Latency(us) 00:22:47.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.504 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:47.504 Verification LBA range: start 0x0 length 0x2000 00:22:47.504 TLSTESTn1 : 10.04 3092.47 12.08 0.00 0.00 41290.76 9029.40 54758.97 00:22:47.504 =================================================================================================================== 00:22:47.504 Total : 3092.47 12.08 0.00 0.00 41290.76 9029.40 54758.97 00:22:47.504 0 00:22:47.504 10:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.504 10:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3856530 00:22:47.504 10:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3856530 ']' 00:22:47.504 10:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3856530 00:22:47.504 10:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:47.504 10:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:47.504 10:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3856530 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3856530' 00:22:47.762 killing process with pid 3856530 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3856530 00:22:47.762 Received shutdown signal, test time was about 10.000000 seconds 00:22:47.762 00:22:47.762 Latency(us) 
00:22:47.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.762 =================================================================================================================== 00:22:47.762 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.762 [2024-07-23 10:43:36.016944] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3856530 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pt46tBpyFY 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pt46tBpyFY 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pt46tBpyFY 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Pt46tBpyFY' 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3857521 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3857521 /var/tmp/bdevperf.sock 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3857521 ']' 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:47.762 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.762 [2024-07-23 10:43:36.234061] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:47.762 [2024-07-23 10:43:36.234157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857521 ] 00:22:47.762 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.019 [2024-07-23 10:43:36.295742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.019 [2024-07-23 10:43:36.383682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.019 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:48.019 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:48.019 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pt46tBpyFY 00:22:48.276 [2024-07-23 10:43:36.761224] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.276 [2024-07-23 10:43:36.761345] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:48.277 [2024-07-23 10:43:36.771765] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:48.277 [2024-07-23 10:43:36.771981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a6c0 (107): Transport endpoint is not connected 00:22:48.277 [2024-07-23 10:43:36.772963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a6c0 (9): Bad file descriptor 00:22:48.277 [2024-07-23 10:43:36.773962] 
nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:48.277 [2024-07-23 10:43:36.773983] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:48.277 [2024-07-23 10:43:36.774000] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:48.277 request: 00:22:48.277 { 00:22:48.277 "name": "TLSTEST", 00:22:48.277 "trtype": "tcp", 00:22:48.277 "traddr": "10.0.0.2", 00:22:48.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.277 "adrfam": "ipv4", 00:22:48.277 "trsvcid": "4420", 00:22:48.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.277 "psk": "/tmp/tmp.Pt46tBpyFY", 00:22:48.277 "method": "bdev_nvme_attach_controller", 00:22:48.277 "req_id": 1 00:22:48.277 } 00:22:48.277 Got JSON-RPC error response 00:22:48.277 response: 00:22:48.277 { 00:22:48.277 "code": -5, 00:22:48.277 "message": "Input/output error" 00:22:48.277 } 00:22:48.534 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3857521 00:22:48.534 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3857521 ']' 00:22:48.534 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3857521 00:22:48.534 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:48.534 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:48.534 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3857521 00:22:48.534 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:48.534 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:48.534 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3857521' 00:22:48.534 killing process with pid 3857521 00:22:48.534 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3857521 
00:22:48.534 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.534 00:22:48.534 Latency(us) 00:22:48.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.534 =================================================================================================================== 00:22:48.534 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:48.534 [2024-07-23 10:43:36.821228] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:48.534 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3857521 00:22:48.534 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9fWfHQVBsq 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9fWfHQVBsq 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9fWfHQVBsq 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9fWfHQVBsq' 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3857639 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3857639 /var/tmp/bdevperf.sock 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3857639 ']' 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:48.535 10:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.535 [2024-07-23 10:43:37.011308] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:48.535 [2024-07-23 10:43:37.011405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857639 ] 00:22:48.792 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.792 [2024-07-23 10:43:37.079593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.792 [2024-07-23 10:43:37.175069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.049 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:49.049 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:49.049 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.9fWfHQVBsq 00:22:49.307 [2024-07-23 10:43:37.579787] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.307 [2024-07-23 10:43:37.579890] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:49.307 [2024-07-23 10:43:37.584866] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:49.307 [2024-07-23 10:43:37.584909] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 
nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:49.307 [2024-07-23 10:43:37.584955] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:49.307 [2024-07-23 10:43:37.585570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13216c0 (107): Transport endpoint is not connected 00:22:49.307 [2024-07-23 10:43:37.586560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13216c0 (9): Bad file descriptor 00:22:49.307 [2024-07-23 10:43:37.587559] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.307 [2024-07-23 10:43:37.587577] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:49.307 [2024-07-23 10:43:37.587592] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:49.307 request: 00:22:49.307 { 00:22:49.307 "name": "TLSTEST", 00:22:49.307 "trtype": "tcp", 00:22:49.307 "traddr": "10.0.0.2", 00:22:49.307 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:49.307 "adrfam": "ipv4", 00:22:49.307 "trsvcid": "4420", 00:22:49.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.307 "psk": "/tmp/tmp.9fWfHQVBsq", 00:22:49.307 "method": "bdev_nvme_attach_controller", 00:22:49.307 "req_id": 1 00:22:49.307 } 00:22:49.307 Got JSON-RPC error response 00:22:49.307 response: 00:22:49.307 { 00:22:49.307 "code": -5, 00:22:49.307 "message": "Input/output error" 00:22:49.307 } 00:22:49.307 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3857639 00:22:49.307 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3857639 ']' 00:22:49.307 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3857639 00:22:49.307 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:49.307 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.307 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3857639 00:22:49.307 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:49.307 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:49.307 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3857639' 00:22:49.307 killing process with pid 3857639 00:22:49.307 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3857639 00:22:49.307 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.307 00:22:49.307 Latency(us) 00:22:49.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.307 =================================================================================================================== 00:22:49.307 Total : 0.00 0.00 0.00 
0.00 0.00 18446744073709551616.00 0.00 00:22:49.307 [2024-07-23 10:43:37.632339] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:49.307 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3857639 00:22:49.307 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9fWfHQVBsq 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9fWfHQVBsq 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9fWfHQVBsq 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host1 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9fWfHQVBsq' 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3857741 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3857741 /var/tmp/bdevperf.sock 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3857741 ']' 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:49.308 10:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.565 [2024-07-23 10:43:37.816121] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:49.565 [2024-07-23 10:43:37.816214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857741 ] 00:22:49.565 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.565 [2024-07-23 10:43:37.873324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.565 [2024-07-23 10:43:37.949004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.565 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:49.565 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:49.565 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9fWfHQVBsq 00:22:49.823 [2024-07-23 10:43:38.289235] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.823 [2024-07-23 10:43:38.289335] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:49.823 [2024-07-23 10:43:38.295445] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:49.823 [2024-07-23 10:43:38.295473] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:49.823 [2024-07-23 10:43:38.295528] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:49.823 
[2024-07-23 10:43:38.295803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7b6c0 (107): Transport endpoint is not connected 00:22:49.823 [2024-07-23 10:43:38.296794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7b6c0 (9): Bad file descriptor 00:22:49.823 [2024-07-23 10:43:38.297794] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:49.823 [2024-07-23 10:43:38.297811] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:49.823 [2024-07-23 10:43:38.297826] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:49.823 request: 00:22:49.823 { 00:22:49.823 "name": "TLSTEST", 00:22:49.823 "trtype": "tcp", 00:22:49.823 "traddr": "10.0.0.2", 00:22:49.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.823 "adrfam": "ipv4", 00:22:49.823 "trsvcid": "4420", 00:22:49.823 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:49.823 "psk": "/tmp/tmp.9fWfHQVBsq", 00:22:49.823 "method": "bdev_nvme_attach_controller", 00:22:49.823 "req_id": 1 00:22:49.823 } 00:22:49.823 Got JSON-RPC error response 00:22:49.823 response: 00:22:49.823 { 00:22:49.823 "code": -5, 00:22:49.823 "message": "Input/output error" 00:22:49.823 } 00:22:49.823 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3857741 00:22:49.823 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3857741 ']' 00:22:49.823 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3857741 00:22:49.823 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:49.823 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.823 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3857741 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
process_name=reactor_2 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3857741' 00:22:50.081 killing process with pid 3857741 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3857741 00:22:50.081 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.081 00:22:50.081 Latency(us) 00:22:50.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.081 =================================================================================================================== 00:22:50.081 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:50.081 [2024-07-23 10:43:38.339038] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3857741 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:50.081 
10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3857758 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3857758 /var/tmp/bdevperf.sock 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3857758 ']' 00:22:50.081 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.082 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:50.082 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:50.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.082 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:50.082 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.082 [2024-07-23 10:43:38.525277] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:50.082 [2024-07-23 10:43:38.525350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857758 ] 00:22:50.082 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.082 [2024-07-23 10:43:38.576988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.339 [2024-07-23 10:43:38.659680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.339 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:50.339 10:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:50.339 10:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:50.597 [2024-07-23 10:43:39.037457] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:50.597 [2024-07-23 10:43:39.039303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c5aa0 (9): Bad file descriptor 00:22:50.597 [2024-07-23 10:43:39.040275] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.597 [2024-07-23 10:43:39.040294] 
nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:50.597 [2024-07-23 10:43:39.040309] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:50.597 request: 00:22:50.597 { 00:22:50.597 "name": "TLSTEST", 00:22:50.597 "trtype": "tcp", 00:22:50.597 "traddr": "10.0.0.2", 00:22:50.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.597 "adrfam": "ipv4", 00:22:50.597 "trsvcid": "4420", 00:22:50.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.597 "method": "bdev_nvme_attach_controller", 00:22:50.597 "req_id": 1 00:22:50.597 } 00:22:50.597 Got JSON-RPC error response 00:22:50.597 response: 00:22:50.597 { 00:22:50.597 "code": -5, 00:22:50.597 "message": "Input/output error" 00:22:50.597 } 00:22:50.597 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3857758 00:22:50.597 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3857758 ']' 00:22:50.597 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3857758 00:22:50.597 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:50.598 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:50.598 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3857758 00:22:50.598 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:50.598 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:50.598 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3857758' 00:22:50.598 killing process with pid 3857758 00:22:50.598 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3857758 00:22:50.598 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.598 00:22:50.598 Latency(us) 00:22:50.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:22:50.598 =================================================================================================================== 00:22:50.598 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:50.598 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3857758 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3855103 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3855103 ']' 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3855103 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3855103 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3855103' 00:22:50.856 killing process with pid 3855103 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3855103 00:22:50.856 [2024-07-23 10:43:39.253020] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:50.856 10:43:39 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 3855103 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.R4Eewn4ize 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.R4Eewn4ize 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3857875 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3857875 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3857875 ']' 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:51.114 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.114 [2024-07-23 10:43:39.485246] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:51.114 [2024-07-23 10:43:39.485331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.114 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.114 [2024-07-23 10:43:39.536263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.114 [2024-07-23 10:43:39.608205] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.114 [2024-07-23 10:43:39.608247] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.114 [2024-07-23 10:43:39.608260] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.114 [2024-07-23 10:43:39.608271] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:51.114 [2024-07-23 10:43:39.608280] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.114 [2024-07-23 10:43:39.608304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.372 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:51.372 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:51.372 10:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.372 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.372 10:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.372 10:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.372 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.R4Eewn4ize 00:22:51.372 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.R4Eewn4ize 00:22:51.372 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:51.629 [2024-07-23 10:43:39.970446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.629 10:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:51.886 10:43:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:52.144 [2024-07-23 10:43:40.451725] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.144 [2024-07-23 10:43:40.451927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.144 
10:43:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:52.401 malloc0 00:22:52.401 10:43:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:52.659 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R4Eewn4ize 00:22:52.916 [2024-07-23 10:43:41.287180] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.R4Eewn4ize 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.R4Eewn4ize' 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3858095 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3858095 /var/tmp/bdevperf.sock 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' 
-z 3858095 ']' 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:52.916 10:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.916 [2024-07-23 10:43:41.354934] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:52.916 [2024-07-23 10:43:41.355033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858095 ] 00:22:52.916 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.916 [2024-07-23 10:43:41.416278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.173 [2024-07-23 10:43:41.504461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.173 10:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:53.173 10:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:53.173 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R4Eewn4ize 00:22:53.429 [2024-07-23 10:43:41.883590] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:22:53.429 [2024-07-23 10:43:41.883702] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.685 TLSTESTn1 00:22:53.685 10:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:53.685 Running I/O for 10 seconds... 00:23:03.672 00:23:03.672 Latency(us) 00:23:03.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.672 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:03.672 Verification LBA range: start 0x0 length 0x2000 00:23:03.672 TLSTESTn1 : 10.03 3552.07 13.88 0.00 0.00 35959.69 8204.14 44467.39 00:23:03.672 =================================================================================================================== 00:23:03.672 Total : 3552.07 13.88 0.00 0.00 35959.69 8204.14 44467.39 00:23:03.672 0 00:23:03.672 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.672 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3858095 00:23:03.672 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3858095 ']' 00:23:03.672 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3858095 00:23:03.672 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:03.672 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:03.672 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3858095 00:23:03.672 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:03.672 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:03.672 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing 
process with pid 3858095' 00:23:03.930 killing process with pid 3858095 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3858095 00:23:03.930 Received shutdown signal, test time was about 10.000000 seconds 00:23:03.930 00:23:03.930 Latency(us) 00:23:03.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.930 =================================================================================================================== 00:23:03.930 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.930 [2024-07-23 10:43:52.175352] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3858095 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.R4Eewn4ize 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.R4Eewn4ize 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.R4Eewn4ize 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.R4Eewn4ize 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn 
hostnqn psk 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.R4Eewn4ize' 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3859095 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3859095 /var/tmp/bdevperf.sock 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3859095 ']' 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:03.930 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.930 [2024-07-23 10:43:52.380625] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:03.930 [2024-07-23 10:43:52.380718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859095 ] 00:23:03.930 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.187 [2024-07-23 10:43:52.437381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.187 [2024-07-23 10:43:52.512987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.187 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:04.187 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:04.188 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R4Eewn4ize 00:23:04.445 [2024-07-23 10:43:52.905765] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.445 [2024-07-23 10:43:52.905839] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:04.445 [2024-07-23 10:43:52.905854] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.R4Eewn4ize 00:23:04.445 request: 00:23:04.445 { 00:23:04.445 "name": "TLSTEST", 00:23:04.445 "trtype": "tcp", 00:23:04.445 "traddr": "10.0.0.2", 00:23:04.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.445 "adrfam": "ipv4", 00:23:04.445 "trsvcid": "4420", 00:23:04.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.445 "psk": "/tmp/tmp.R4Eewn4ize", 00:23:04.445 "method": "bdev_nvme_attach_controller", 00:23:04.445 "req_id": 1 00:23:04.445 } 00:23:04.445 Got JSON-RPC error response 00:23:04.445 response: 00:23:04.445 { 00:23:04.445 "code": -1, 00:23:04.445 
"message": "Operation not permitted" 00:23:04.445 } 00:23:04.445 10:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3859095 00:23:04.445 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3859095 ']' 00:23:04.445 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3859095 00:23:04.445 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:04.445 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:04.445 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3859095 00:23:04.703 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:04.703 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:04.703 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3859095' 00:23:04.703 killing process with pid 3859095 00:23:04.703 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3859095 00:23:04.703 Received shutdown signal, test time was about 10.000000 seconds 00:23:04.703 00:23:04.703 Latency(us) 00:23:04.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.703 =================================================================================================================== 00:23:04.703 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:04.703 10:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3859095 00:23:04.703 10:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:04.703 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:04.703 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:04.703 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:04.703 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( 
!es == 0 )) 00:23:04.703 10:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3857875 00:23:04.703 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3857875 ']' 00:23:04.703 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3857875 00:23:04.704 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:04.704 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:04.704 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3857875 00:23:04.704 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:04.704 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:04.704 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3857875' 00:23:04.704 killing process with pid 3857875 00:23:04.704 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3857875 00:23:04.704 [2024-07-23 10:43:53.119902] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:04.704 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3857875 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3859204 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- 
# waitforlisten 3859204 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3859204 ']' 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:04.960 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.960 [2024-07-23 10:43:53.330232] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:04.960 [2024-07-23 10:43:53.330331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.960 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.960 [2024-07-23 10:43:53.389715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.218 [2024-07-23 10:43:53.470292] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.218 [2024-07-23 10:43:53.470348] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.218 [2024-07-23 10:43:53.470361] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.218 [2024-07-23 10:43:53.470372] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.218 [2024-07-23 10:43:53.470382] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
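[Annotation] The `bdev_nvme_attach_controller` failure above ("Incorrect permissions for PSK file" → JSON-RPC code -1, "Operation not permitted") is the expected negative case: `target/tls.sh@170` deliberately ran `chmod 0666` on `/tmp/tmp.R4Eewn4ize`, and SPDK refuses to load a PSK whose file mode grants any group/other access. A minimal sketch of that permission check (the helper name is illustrative, not SPDK's actual C function):

```python
import os
import stat
import tempfile

def psk_file_permissions_ok(path: str) -> bool:
    """Reject a PSK file readable or writable by group/other, mirroring
    the check behind SPDK's 'Incorrect permissions for PSK file' error
    (illustrative helper, not the SPDK API)."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# Demonstrate with a temporary stand-in for the test's key file:
with tempfile.NamedTemporaryFile(delete=False) as f:
    key_path = f.name

os.chmod(key_path, 0o666)   # world-readable: attach fails, as in the log
assert not psk_file_permissions_ok(key_path)

os.chmod(key_path, 0o600)   # owner-only: attach succeeds later in the run
assert psk_file_permissions_ok(key_path)

os.unlink(key_path)
```

This matches the flow of the run: the 0666 attach attempt returns the error, then `target/tls.sh@181` restores `chmod 0600` and the subsequent attach creates `TLSTESTn1`.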
00:23:05.218 [2024-07-23 10:43:53.470416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.R4Eewn4ize 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.R4Eewn4ize 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.R4Eewn4ize 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.R4Eewn4ize 00:23:05.218 10:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:05.475 [2024-07-23 10:43:53.863453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.475 10:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:05.733 10:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:05.991 [2024-07-23 10:43:54.340760] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.991 [2024-07-23 10:43:54.340979] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.991 10:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:06.249 malloc0 00:23:06.249 10:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:06.507 10:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R4Eewn4ize 00:23:06.765 [2024-07-23 10:43:55.164345] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:06.765 [2024-07-23 10:43:55.164383] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:06.765 [2024-07-23 10:43:55.164422] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:06.765 request: 00:23:06.765 { 00:23:06.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.765 "host": "nqn.2016-06.io.spdk:host1", 00:23:06.765 "psk": "/tmp/tmp.R4Eewn4ize", 00:23:06.765 "method": "nvmf_subsystem_add_host", 00:23:06.765 "req_id": 1 00:23:06.765 } 00:23:06.765 Got JSON-RPC error response 00:23:06.765 response: 00:23:06.765 { 00:23:06.765 "code": -32603, 00:23:06.765 
"message": "Internal error" 00:23:06.765 } 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3859204 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3859204 ']' 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3859204 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3859204 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3859204' 00:23:06.765 killing process with pid 3859204 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3859204 00:23:06.765 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3859204 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.R4Eewn4ize 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.023 
10:43:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3859432 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3859432 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3859432 ']' 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:07.023 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.023 [2024-07-23 10:43:55.412975] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:07.023 [2024-07-23 10:43:55.413059] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.023 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.023 [2024-07-23 10:43:55.465656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.281 [2024-07-23 10:43:55.538305] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.281 [2024-07-23 10:43:55.538364] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:07.281 [2024-07-23 10:43:55.538377] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.281 [2024-07-23 10:43:55.538388] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.281 [2024-07-23 10:43:55.538407] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.281 [2024-07-23 10:43:55.538434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.281 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:07.281 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:07.281 10:43:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.281 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.281 10:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.281 10:43:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.281 10:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.R4Eewn4ize 00:23:07.281 10:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.R4Eewn4ize 00:23:07.281 10:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:07.539 [2024-07-23 10:43:55.961799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.539 10:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:07.796 10:43:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:23:08.054 [2024-07-23 10:43:56.551367] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:08.054 [2024-07-23 10:43:56.551602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.313 10:43:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:08.571 malloc0 00:23:08.571 10:43:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:08.829 10:43:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R4Eewn4ize 00:23:09.087 [2024-07-23 10:43:57.370785] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:09.087 10:43:57 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3859649 00:23:09.087 10:43:57 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.087 10:43:57 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.087 10:43:57 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3859649 /var/tmp/bdevperf.sock 00:23:09.087 10:43:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3859649 ']' 00:23:09.087 10:43:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.087 10:43:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:09.087 10:43:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:09.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.087 10:43:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:09.087 10:43:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.087 [2024-07-23 10:43:57.421253] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:09.087 [2024-07-23 10:43:57.421332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859649 ] 00:23:09.087 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.087 [2024-07-23 10:43:57.469386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.087 [2024-07-23 10:43:57.541491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.345 10:43:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:09.345 10:43:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:09.345 10:43:57 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R4Eewn4ize 00:23:09.603 [2024-07-23 10:43:57.853684] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.603 [2024-07-23 10:43:57.853785] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:09.603 TLSTESTn1 00:23:09.603 10:43:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:09.861 
10:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:09.861 "subsystems": [ 00:23:09.861 { 00:23:09.861 "subsystem": "keyring", 00:23:09.861 "config": [] 00:23:09.861 }, 00:23:09.861 { 00:23:09.861 "subsystem": "iobuf", 00:23:09.861 "config": [ 00:23:09.861 { 00:23:09.861 "method": "iobuf_set_options", 00:23:09.861 "params": { 00:23:09.861 "small_pool_count": 8192, 00:23:09.861 "large_pool_count": 1024, 00:23:09.861 "small_bufsize": 8192, 00:23:09.861 "large_bufsize": 135168 00:23:09.861 } 00:23:09.861 } 00:23:09.861 ] 00:23:09.861 }, 00:23:09.861 { 00:23:09.861 "subsystem": "sock", 00:23:09.861 "config": [ 00:23:09.861 { 00:23:09.861 "method": "sock_set_default_impl", 00:23:09.861 "params": { 00:23:09.861 "impl_name": "posix" 00:23:09.861 } 00:23:09.861 }, 00:23:09.861 { 00:23:09.861 "method": "sock_impl_set_options", 00:23:09.861 "params": { 00:23:09.861 "impl_name": "ssl", 00:23:09.861 "recv_buf_size": 4096, 00:23:09.861 "send_buf_size": 4096, 00:23:09.861 "enable_recv_pipe": true, 00:23:09.861 "enable_quickack": false, 00:23:09.861 "enable_placement_id": 0, 00:23:09.861 "enable_zerocopy_send_server": true, 00:23:09.861 "enable_zerocopy_send_client": false, 00:23:09.861 "zerocopy_threshold": 0, 00:23:09.861 "tls_version": 0, 00:23:09.861 "enable_ktls": false 00:23:09.861 } 00:23:09.861 }, 00:23:09.861 { 00:23:09.861 "method": "sock_impl_set_options", 00:23:09.861 "params": { 00:23:09.861 "impl_name": "posix", 00:23:09.861 "recv_buf_size": 2097152, 00:23:09.861 "send_buf_size": 2097152, 00:23:09.861 "enable_recv_pipe": true, 00:23:09.861 "enable_quickack": false, 00:23:09.861 "enable_placement_id": 0, 00:23:09.861 "enable_zerocopy_send_server": true, 00:23:09.861 "enable_zerocopy_send_client": false, 00:23:09.861 "zerocopy_threshold": 0, 00:23:09.861 "tls_version": 0, 00:23:09.861 "enable_ktls": false 00:23:09.861 } 00:23:09.861 } 00:23:09.861 ] 00:23:09.861 }, 00:23:09.861 { 00:23:09.861 "subsystem": "vmd", 00:23:09.861 "config": [] 00:23:09.861 
}, 00:23:09.861 { 00:23:09.861 "subsystem": "accel", 00:23:09.861 "config": [ 00:23:09.861 { 00:23:09.861 "method": "accel_set_options", 00:23:09.861 "params": { 00:23:09.861 "small_cache_size": 128, 00:23:09.861 "large_cache_size": 16, 00:23:09.861 "task_count": 2048, 00:23:09.861 "sequence_count": 2048, 00:23:09.861 "buf_count": 2048 00:23:09.861 } 00:23:09.861 } 00:23:09.861 ] 00:23:09.861 }, 00:23:09.861 { 00:23:09.861 "subsystem": "bdev", 00:23:09.861 "config": [ 00:23:09.861 { 00:23:09.861 "method": "bdev_set_options", 00:23:09.861 "params": { 00:23:09.861 "bdev_io_pool_size": 65535, 00:23:09.861 "bdev_io_cache_size": 256, 00:23:09.861 "bdev_auto_examine": true, 00:23:09.861 "iobuf_small_cache_size": 128, 00:23:09.861 "iobuf_large_cache_size": 16 00:23:09.861 } 00:23:09.861 }, 00:23:09.861 { 00:23:09.861 "method": "bdev_raid_set_options", 00:23:09.861 "params": { 00:23:09.861 "process_window_size_kb": 1024 00:23:09.861 } 00:23:09.861 }, 00:23:09.861 { 00:23:09.861 "method": "bdev_iscsi_set_options", 00:23:09.861 "params": { 00:23:09.861 "timeout_sec": 30 00:23:09.861 } 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "method": "bdev_nvme_set_options", 00:23:09.862 "params": { 00:23:09.862 "action_on_timeout": "none", 00:23:09.862 "timeout_us": 0, 00:23:09.862 "timeout_admin_us": 0, 00:23:09.862 "keep_alive_timeout_ms": 10000, 00:23:09.862 "arbitration_burst": 0, 00:23:09.862 "low_priority_weight": 0, 00:23:09.862 "medium_priority_weight": 0, 00:23:09.862 "high_priority_weight": 0, 00:23:09.862 "nvme_adminq_poll_period_us": 10000, 00:23:09.862 "nvme_ioq_poll_period_us": 0, 00:23:09.862 "io_queue_requests": 0, 00:23:09.862 "delay_cmd_submit": true, 00:23:09.862 "transport_retry_count": 4, 00:23:09.862 "bdev_retry_count": 3, 00:23:09.862 "transport_ack_timeout": 0, 00:23:09.862 "ctrlr_loss_timeout_sec": 0, 00:23:09.862 "reconnect_delay_sec": 0, 00:23:09.862 "fast_io_fail_timeout_sec": 0, 00:23:09.862 "disable_auto_failback": false, 00:23:09.862 "generate_uuids": 
false, 00:23:09.862 "transport_tos": 0, 00:23:09.862 "nvme_error_stat": false, 00:23:09.862 "rdma_srq_size": 0, 00:23:09.862 "io_path_stat": false, 00:23:09.862 "allow_accel_sequence": false, 00:23:09.862 "rdma_max_cq_size": 0, 00:23:09.862 "rdma_cm_event_timeout_ms": 0, 00:23:09.862 "dhchap_digests": [ 00:23:09.862 "sha256", 00:23:09.862 "sha384", 00:23:09.862 "sha512" 00:23:09.862 ], 00:23:09.862 "dhchap_dhgroups": [ 00:23:09.862 "null", 00:23:09.862 "ffdhe2048", 00:23:09.862 "ffdhe3072", 00:23:09.862 "ffdhe4096", 00:23:09.862 "ffdhe6144", 00:23:09.862 "ffdhe8192" 00:23:09.862 ] 00:23:09.862 } 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "method": "bdev_nvme_set_hotplug", 00:23:09.862 "params": { 00:23:09.862 "period_us": 100000, 00:23:09.862 "enable": false 00:23:09.862 } 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "method": "bdev_malloc_create", 00:23:09.862 "params": { 00:23:09.862 "name": "malloc0", 00:23:09.862 "num_blocks": 8192, 00:23:09.862 "block_size": 4096, 00:23:09.862 "physical_block_size": 4096, 00:23:09.862 "uuid": "8b891acd-f18e-4b36-8715-6926b0c8ce6c", 00:23:09.862 "optimal_io_boundary": 0 00:23:09.862 } 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "method": "bdev_wait_for_examine" 00:23:09.862 } 00:23:09.862 ] 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "subsystem": "nbd", 00:23:09.862 "config": [] 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "subsystem": "scheduler", 00:23:09.862 "config": [ 00:23:09.862 { 00:23:09.862 "method": "framework_set_scheduler", 00:23:09.862 "params": { 00:23:09.862 "name": "static" 00:23:09.862 } 00:23:09.862 } 00:23:09.862 ] 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "subsystem": "nvmf", 00:23:09.862 "config": [ 00:23:09.862 { 00:23:09.862 "method": "nvmf_set_config", 00:23:09.862 "params": { 00:23:09.862 "discovery_filter": "match_any", 00:23:09.862 "admin_cmd_passthru": { 00:23:09.862 "identify_ctrlr": false 00:23:09.862 } 00:23:09.862 } 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "method": 
"nvmf_set_max_subsystems", 00:23:09.862 "params": { 00:23:09.862 "max_subsystems": 1024 00:23:09.862 } 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "method": "nvmf_set_crdt", 00:23:09.862 "params": { 00:23:09.862 "crdt1": 0, 00:23:09.862 "crdt2": 0, 00:23:09.862 "crdt3": 0 00:23:09.862 } 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "method": "nvmf_create_transport", 00:23:09.862 "params": { 00:23:09.862 "trtype": "TCP", 00:23:09.862 "max_queue_depth": 128, 00:23:09.862 "max_io_qpairs_per_ctrlr": 127, 00:23:09.862 "in_capsule_data_size": 4096, 00:23:09.862 "max_io_size": 131072, 00:23:09.862 "io_unit_size": 131072, 00:23:09.862 "max_aq_depth": 128, 00:23:09.862 "num_shared_buffers": 511, 00:23:09.862 "buf_cache_size": 4294967295, 00:23:09.862 "dif_insert_or_strip": false, 00:23:09.862 "zcopy": false, 00:23:09.862 "c2h_success": false, 00:23:09.862 "sock_priority": 0, 00:23:09.862 "abort_timeout_sec": 1, 00:23:09.862 "ack_timeout": 0, 00:23:09.862 "data_wr_pool_size": 0 00:23:09.862 } 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "method": "nvmf_create_subsystem", 00:23:09.862 "params": { 00:23:09.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.862 "allow_any_host": false, 00:23:09.862 "serial_number": "SPDK00000000000001", 00:23:09.862 "model_number": "SPDK bdev Controller", 00:23:09.862 "max_namespaces": 10, 00:23:09.862 "min_cntlid": 1, 00:23:09.862 "max_cntlid": 65519, 00:23:09.862 "ana_reporting": false 00:23:09.862 } 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "method": "nvmf_subsystem_add_host", 00:23:09.862 "params": { 00:23:09.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.862 "host": "nqn.2016-06.io.spdk:host1", 00:23:09.862 "psk": "/tmp/tmp.R4Eewn4ize" 00:23:09.862 } 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "method": "nvmf_subsystem_add_ns", 00:23:09.862 "params": { 00:23:09.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.862 "namespace": { 00:23:09.862 "nsid": 1, 00:23:09.862 "bdev_name": "malloc0", 00:23:09.862 "nguid": 
"8B891ACDF18E4B3687156926B0C8CE6C", 00:23:09.862 "uuid": "8b891acd-f18e-4b36-8715-6926b0c8ce6c", 00:23:09.862 "no_auto_visible": false 00:23:09.862 } 00:23:09.862 } 00:23:09.862 }, 00:23:09.862 { 00:23:09.862 "method": "nvmf_subsystem_add_listener", 00:23:09.862 "params": { 00:23:09.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.862 "listen_address": { 00:23:09.862 "trtype": "TCP", 00:23:09.862 "adrfam": "IPv4", 00:23:09.862 "traddr": "10.0.0.2", 00:23:09.862 "trsvcid": "4420" 00:23:09.862 }, 00:23:09.862 "secure_channel": true 00:23:09.862 } 00:23:09.862 } 00:23:09.862 ] 00:23:09.862 } 00:23:09.862 ] 00:23:09.862 }' 00:23:09.862 10:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:10.121 10:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:10.121 "subsystems": [ 00:23:10.121 { 00:23:10.121 "subsystem": "keyring", 00:23:10.121 "config": [] 00:23:10.121 }, 00:23:10.121 { 00:23:10.121 "subsystem": "iobuf", 00:23:10.121 "config": [ 00:23:10.121 { 00:23:10.121 "method": "iobuf_set_options", 00:23:10.121 "params": { 00:23:10.121 "small_pool_count": 8192, 00:23:10.121 "large_pool_count": 1024, 00:23:10.121 "small_bufsize": 8192, 00:23:10.121 "large_bufsize": 135168 00:23:10.121 } 00:23:10.121 } 00:23:10.121 ] 00:23:10.121 }, 00:23:10.121 { 00:23:10.121 "subsystem": "sock", 00:23:10.121 "config": [ 00:23:10.121 { 00:23:10.121 "method": "sock_set_default_impl", 00:23:10.121 "params": { 00:23:10.121 "impl_name": "posix" 00:23:10.121 } 00:23:10.121 }, 00:23:10.121 { 00:23:10.121 "method": "sock_impl_set_options", 00:23:10.121 "params": { 00:23:10.121 "impl_name": "ssl", 00:23:10.121 "recv_buf_size": 4096, 00:23:10.121 "send_buf_size": 4096, 00:23:10.121 "enable_recv_pipe": true, 00:23:10.121 "enable_quickack": false, 00:23:10.121 "enable_placement_id": 0, 00:23:10.121 "enable_zerocopy_send_server": true, 00:23:10.121 
"enable_zerocopy_send_client": false, 00:23:10.121 "zerocopy_threshold": 0, 00:23:10.121 "tls_version": 0, 00:23:10.121 "enable_ktls": false 00:23:10.121 } 00:23:10.121 }, 00:23:10.121 { 00:23:10.121 "method": "sock_impl_set_options", 00:23:10.121 "params": { 00:23:10.121 "impl_name": "posix", 00:23:10.121 "recv_buf_size": 2097152, 00:23:10.121 "send_buf_size": 2097152, 00:23:10.121 "enable_recv_pipe": true, 00:23:10.121 "enable_quickack": false, 00:23:10.121 "enable_placement_id": 0, 00:23:10.121 "enable_zerocopy_send_server": true, 00:23:10.121 "enable_zerocopy_send_client": false, 00:23:10.121 "zerocopy_threshold": 0, 00:23:10.121 "tls_version": 0, 00:23:10.121 "enable_ktls": false 00:23:10.121 } 00:23:10.121 } 00:23:10.121 ] 00:23:10.121 }, 00:23:10.121 { 00:23:10.121 "subsystem": "vmd", 00:23:10.121 "config": [] 00:23:10.121 }, 00:23:10.121 { 00:23:10.121 "subsystem": "accel", 00:23:10.121 "config": [ 00:23:10.121 { 00:23:10.121 "method": "accel_set_options", 00:23:10.121 "params": { 00:23:10.121 "small_cache_size": 128, 00:23:10.121 "large_cache_size": 16, 00:23:10.121 "task_count": 2048, 00:23:10.121 "sequence_count": 2048, 00:23:10.121 "buf_count": 2048 00:23:10.121 } 00:23:10.121 } 00:23:10.121 ] 00:23:10.121 }, 00:23:10.121 { 00:23:10.121 "subsystem": "bdev", 00:23:10.121 "config": [ 00:23:10.121 { 00:23:10.121 "method": "bdev_set_options", 00:23:10.121 "params": { 00:23:10.121 "bdev_io_pool_size": 65535, 00:23:10.121 "bdev_io_cache_size": 256, 00:23:10.121 "bdev_auto_examine": true, 00:23:10.121 "iobuf_small_cache_size": 128, 00:23:10.121 "iobuf_large_cache_size": 16 00:23:10.121 } 00:23:10.121 }, 00:23:10.121 { 00:23:10.121 "method": "bdev_raid_set_options", 00:23:10.122 "params": { 00:23:10.122 "process_window_size_kb": 1024 00:23:10.122 } 00:23:10.122 }, 00:23:10.122 { 00:23:10.122 "method": "bdev_iscsi_set_options", 00:23:10.122 "params": { 00:23:10.122 "timeout_sec": 30 00:23:10.122 } 00:23:10.122 }, 00:23:10.122 { 00:23:10.122 "method": 
"bdev_nvme_set_options", 00:23:10.122 "params": { 00:23:10.122 "action_on_timeout": "none", 00:23:10.122 "timeout_us": 0, 00:23:10.122 "timeout_admin_us": 0, 00:23:10.122 "keep_alive_timeout_ms": 10000, 00:23:10.122 "arbitration_burst": 0, 00:23:10.122 "low_priority_weight": 0, 00:23:10.122 "medium_priority_weight": 0, 00:23:10.122 "high_priority_weight": 0, 00:23:10.122 "nvme_adminq_poll_period_us": 10000, 00:23:10.122 "nvme_ioq_poll_period_us": 0, 00:23:10.122 "io_queue_requests": 512, 00:23:10.122 "delay_cmd_submit": true, 00:23:10.122 "transport_retry_count": 4, 00:23:10.122 "bdev_retry_count": 3, 00:23:10.122 "transport_ack_timeout": 0, 00:23:10.122 "ctrlr_loss_timeout_sec": 0, 00:23:10.122 "reconnect_delay_sec": 0, 00:23:10.122 "fast_io_fail_timeout_sec": 0, 00:23:10.122 "disable_auto_failback": false, 00:23:10.122 "generate_uuids": false, 00:23:10.122 "transport_tos": 0, 00:23:10.122 "nvme_error_stat": false, 00:23:10.122 "rdma_srq_size": 0, 00:23:10.122 "io_path_stat": false, 00:23:10.122 "allow_accel_sequence": false, 00:23:10.122 "rdma_max_cq_size": 0, 00:23:10.122 "rdma_cm_event_timeout_ms": 0, 00:23:10.122 "dhchap_digests": [ 00:23:10.122 "sha256", 00:23:10.122 "sha384", 00:23:10.122 "sha512" 00:23:10.122 ], 00:23:10.122 "dhchap_dhgroups": [ 00:23:10.122 "null", 00:23:10.122 "ffdhe2048", 00:23:10.122 "ffdhe3072", 00:23:10.122 "ffdhe4096", 00:23:10.122 "ffdhe6144", 00:23:10.122 "ffdhe8192" 00:23:10.122 ] 00:23:10.122 } 00:23:10.122 }, 00:23:10.122 { 00:23:10.122 "method": "bdev_nvme_attach_controller", 00:23:10.122 "params": { 00:23:10.122 "name": "TLSTEST", 00:23:10.122 "trtype": "TCP", 00:23:10.122 "adrfam": "IPv4", 00:23:10.122 "traddr": "10.0.0.2", 00:23:10.122 "trsvcid": "4420", 00:23:10.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.122 "prchk_reftag": false, 00:23:10.122 "prchk_guard": false, 00:23:10.122 "ctrlr_loss_timeout_sec": 0, 00:23:10.122 "reconnect_delay_sec": 0, 00:23:10.122 "fast_io_fail_timeout_sec": 0, 00:23:10.122 "psk": 
"/tmp/tmp.R4Eewn4ize", 00:23:10.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.122 "hdgst": false, 00:23:10.122 "ddgst": false 00:23:10.122 } 00:23:10.122 }, 00:23:10.122 { 00:23:10.122 "method": "bdev_nvme_set_hotplug", 00:23:10.122 "params": { 00:23:10.122 "period_us": 100000, 00:23:10.122 "enable": false 00:23:10.122 } 00:23:10.122 }, 00:23:10.122 { 00:23:10.122 "method": "bdev_wait_for_examine" 00:23:10.122 } 00:23:10.122 ] 00:23:10.122 }, 00:23:10.122 { 00:23:10.122 "subsystem": "nbd", 00:23:10.122 "config": [] 00:23:10.122 } 00:23:10.122 ] 00:23:10.122 }' 00:23:10.122 10:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3859649 00:23:10.122 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3859649 ']' 00:23:10.122 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3859649 00:23:10.122 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:10.122 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.122 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3859649 00:23:10.122 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:10.122 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:10.122 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3859649' 00:23:10.122 killing process with pid 3859649 00:23:10.122 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3859649 00:23:10.122 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.122 00:23:10.122 Latency(us) 00:23:10.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.122 =================================================================================================================== 00:23:10.122 Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:23:10.122 [2024-07-23 10:43:58.574597] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:10.122 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3859649 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3859432 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3859432 ']' 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3859432 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3859432 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3859432' 00:23:10.395 killing process with pid 3859432 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3859432 00:23:10.395 [2024-07-23 10:43:58.733047] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3859432 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:10.395 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.395 10:43:58 
nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:10.395 "subsystems": [ 00:23:10.395 { 00:23:10.395 "subsystem": "keyring", 00:23:10.395 "config": [] 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "subsystem": "iobuf", 00:23:10.395 "config": [ 00:23:10.395 { 00:23:10.395 "method": "iobuf_set_options", 00:23:10.395 "params": { 00:23:10.395 "small_pool_count": 8192, 00:23:10.395 "large_pool_count": 1024, 00:23:10.395 "small_bufsize": 8192, 00:23:10.395 "large_bufsize": 135168 00:23:10.395 } 00:23:10.395 } 00:23:10.395 ] 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "subsystem": "sock", 00:23:10.395 "config": [ 00:23:10.395 { 00:23:10.395 "method": "sock_set_default_impl", 00:23:10.395 "params": { 00:23:10.395 "impl_name": "posix" 00:23:10.395 } 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "method": "sock_impl_set_options", 00:23:10.395 "params": { 00:23:10.395 "impl_name": "ssl", 00:23:10.395 "recv_buf_size": 4096, 00:23:10.395 "send_buf_size": 4096, 00:23:10.395 "enable_recv_pipe": true, 00:23:10.395 "enable_quickack": false, 00:23:10.395 "enable_placement_id": 0, 00:23:10.395 "enable_zerocopy_send_server": true, 00:23:10.395 "enable_zerocopy_send_client": false, 00:23:10.395 "zerocopy_threshold": 0, 00:23:10.395 "tls_version": 0, 00:23:10.395 "enable_ktls": false 00:23:10.395 } 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "method": "sock_impl_set_options", 00:23:10.395 "params": { 00:23:10.395 "impl_name": "posix", 00:23:10.395 "recv_buf_size": 2097152, 00:23:10.395 "send_buf_size": 2097152, 00:23:10.395 "enable_recv_pipe": true, 00:23:10.395 "enable_quickack": false, 00:23:10.395 "enable_placement_id": 0, 00:23:10.395 "enable_zerocopy_send_server": true, 00:23:10.395 "enable_zerocopy_send_client": false, 00:23:10.395 "zerocopy_threshold": 0, 00:23:10.395 "tls_version": 0, 00:23:10.395 "enable_ktls": false 00:23:10.395 } 00:23:10.395 } 00:23:10.395 ] 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "subsystem": "vmd", 00:23:10.395 "config": [] 00:23:10.395 }, 
00:23:10.395 { 00:23:10.395 "subsystem": "accel", 00:23:10.395 "config": [ 00:23:10.395 { 00:23:10.395 "method": "accel_set_options", 00:23:10.395 "params": { 00:23:10.395 "small_cache_size": 128, 00:23:10.395 "large_cache_size": 16, 00:23:10.395 "task_count": 2048, 00:23:10.395 "sequence_count": 2048, 00:23:10.395 "buf_count": 2048 00:23:10.395 } 00:23:10.395 } 00:23:10.395 ] 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "subsystem": "bdev", 00:23:10.395 "config": [ 00:23:10.395 { 00:23:10.395 "method": "bdev_set_options", 00:23:10.395 "params": { 00:23:10.395 "bdev_io_pool_size": 65535, 00:23:10.395 "bdev_io_cache_size": 256, 00:23:10.395 "bdev_auto_examine": true, 00:23:10.395 "iobuf_small_cache_size": 128, 00:23:10.395 "iobuf_large_cache_size": 16 00:23:10.395 } 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "method": "bdev_raid_set_options", 00:23:10.395 "params": { 00:23:10.395 "process_window_size_kb": 1024 00:23:10.395 } 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "method": "bdev_iscsi_set_options", 00:23:10.395 "params": { 00:23:10.395 "timeout_sec": 30 00:23:10.395 } 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "method": "bdev_nvme_set_options", 00:23:10.395 "params": { 00:23:10.395 "action_on_timeout": "none", 00:23:10.395 "timeout_us": 0, 00:23:10.395 "timeout_admin_us": 0, 00:23:10.395 "keep_alive_timeout_ms": 10000, 00:23:10.395 "arbitration_burst": 0, 00:23:10.395 "low_priority_weight": 0, 00:23:10.395 "medium_priority_weight": 0, 00:23:10.395 "high_priority_weight": 0, 00:23:10.395 "nvme_adminq_poll_period_us": 10000, 00:23:10.395 "nvme_ioq_poll_period_us": 0, 00:23:10.395 "io_queue_requests": 0, 00:23:10.395 "delay_cmd_submit": true, 00:23:10.395 "transport_retry_count": 4, 00:23:10.395 "bdev_retry_count": 3, 00:23:10.395 "transport_ack_timeout": 0, 00:23:10.395 "ctrlr_loss_timeout_sec": 0, 00:23:10.395 "reconnect_delay_sec": 0, 00:23:10.395 "fast_io_fail_timeout_sec": 0, 00:23:10.395 "disable_auto_failback": false, 00:23:10.395 "generate_uuids": false, 
00:23:10.395 "transport_tos": 0, 00:23:10.395 "nvme_error_stat": false, 00:23:10.395 "rdma_srq_size": 0, 00:23:10.395 "io_path_stat": false, 00:23:10.395 "allow_accel_sequence": false, 00:23:10.395 "rdma_max_cq_size": 0, 00:23:10.395 "rdma_cm_event_timeout_ms": 0, 00:23:10.395 "dhchap_digests": [ 00:23:10.395 "sha256", 00:23:10.395 "sha384", 00:23:10.395 "sha512" 00:23:10.395 ], 00:23:10.395 "dhchap_dhgroups": [ 00:23:10.395 "null", 00:23:10.395 "ffdhe2048", 00:23:10.395 "ffdhe3072", 00:23:10.395 "ffdhe4096", 00:23:10.395 "ffdhe6144", 00:23:10.395 "ffdhe8192" 00:23:10.395 ] 00:23:10.395 } 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "method": "bdev_nvme_set_hotplug", 00:23:10.395 "params": { 00:23:10.395 "period_us": 100000, 00:23:10.395 "enable": false 00:23:10.395 } 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "method": "bdev_malloc_create", 00:23:10.395 "params": { 00:23:10.395 "name": "malloc0", 00:23:10.395 "num_blocks": 8192, 00:23:10.395 "block_size": 4096, 00:23:10.395 "physical_block_size": 4096, 00:23:10.395 "uuid": "8b891acd-f18e-4b36-8715-6926b0c8ce6c", 00:23:10.395 "optimal_io_boundary": 0 00:23:10.395 } 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "method": "bdev_wait_for_examine" 00:23:10.395 } 00:23:10.395 ] 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "subsystem": "nbd", 00:23:10.395 "config": [] 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "subsystem": "scheduler", 00:23:10.395 "config": [ 00:23:10.395 { 00:23:10.395 "method": "framework_set_scheduler", 00:23:10.395 "params": { 00:23:10.395 "name": "static" 00:23:10.395 } 00:23:10.395 } 00:23:10.395 ] 00:23:10.395 }, 00:23:10.395 { 00:23:10.395 "subsystem": "nvmf", 00:23:10.395 "config": [ 00:23:10.395 { 00:23:10.395 "method": "nvmf_set_config", 00:23:10.395 "params": { 00:23:10.396 "discovery_filter": "match_any", 00:23:10.396 "admin_cmd_passthru": { 00:23:10.396 "identify_ctrlr": false 00:23:10.396 } 00:23:10.396 } 00:23:10.396 }, 00:23:10.396 { 00:23:10.396 "method": "nvmf_set_max_subsystems", 
00:23:10.396 "params": { 00:23:10.396 "max_subsystems": 1024 00:23:10.396 } 00:23:10.396 }, 00:23:10.396 { 00:23:10.396 "method": "nvmf_set_crdt", 00:23:10.396 "params": { 00:23:10.396 "crdt1": 0, 00:23:10.396 "crdt2": 0, 00:23:10.396 "crdt3": 0 00:23:10.396 } 00:23:10.396 }, 00:23:10.396 { 00:23:10.396 "method": "nvmf_create_transport", 00:23:10.396 "params": { 00:23:10.396 "trtype": "TCP", 00:23:10.396 "max_queue_depth": 128, 00:23:10.396 "max_io_qpairs_per_ctrlr": 127, 00:23:10.396 "in_capsule_data_size": 4096, 00:23:10.396 "max_io_size": 131072, 00:23:10.396 "io_unit_size": 131072, 00:23:10.396 "max_aq_depth": 128, 00:23:10.396 "num_shared_buffers": 511, 00:23:10.396 "buf_cache_size": 4294967295, 00:23:10.396 "dif_insert_or_strip": false, 00:23:10.396 "zcopy": false, 00:23:10.396 "c2h_success": false, 00:23:10.396 "sock_priority": 0, 00:23:10.396 "abort_timeout_sec": 1, 00:23:10.396 "ack_timeout": 0, 00:23:10.396 "data_wr_pool_size": 0 00:23:10.396 } 00:23:10.396 }, 00:23:10.396 { 00:23:10.396 "method": "nvmf_create_subsystem", 00:23:10.396 "params": { 00:23:10.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.396 "allow_any_host": false, 00:23:10.396 "serial_number": "SPDK00000000000001", 00:23:10.396 "model_number": "SPDK bdev Controller", 00:23:10.396 "max_namespaces": 10, 00:23:10.396 "min_cntlid": 1, 00:23:10.396 "max_cntlid": 65519, 00:23:10.396 "ana_reporting": false 00:23:10.396 } 00:23:10.396 }, 00:23:10.396 { 00:23:10.396 "method": "nvmf_subsystem_add_host", 00:23:10.396 "params": { 00:23:10.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.396 "host": "nqn.2016-06.io.spdk:host1", 00:23:10.396 "psk": "/tmp/tmp.R4Eewn4ize" 00:23:10.396 } 00:23:10.396 }, 00:23:10.396 { 00:23:10.396 "method": "nvmf_subsystem_add_ns", 00:23:10.396 "params": { 00:23:10.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.396 "namespace": { 00:23:10.396 "nsid": 1, 00:23:10.396 "bdev_name": "malloc0", 00:23:10.396 "nguid": "8B891ACDF18E4B3687156926B0C8CE6C", 00:23:10.396 
"uuid": "8b891acd-f18e-4b36-8715-6926b0c8ce6c", 00:23:10.396 "no_auto_visible": false 00:23:10.396 } 00:23:10.396 } 00:23:10.396 }, 00:23:10.396 { 00:23:10.396 "method": "nvmf_subsystem_add_listener", 00:23:10.396 "params": { 00:23:10.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.396 "listen_address": { 00:23:10.396 "trtype": "TCP", 00:23:10.396 "adrfam": "IPv4", 00:23:10.396 "traddr": "10.0.0.2", 00:23:10.396 "trsvcid": "4420" 00:23:10.396 }, 00:23:10.396 "secure_channel": true 00:23:10.396 } 00:23:10.396 } 00:23:10.396 ] 00:23:10.396 } 00:23:10.396 ] 00:23:10.396 }' 00:23:10.396 10:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3859774 00:23:10.396 10:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:10.396 10:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3859774 00:23:10.396 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3859774 ']' 00:23:10.396 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.396 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:10.396 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.396 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:10.396 10:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.654 [2024-07-23 10:43:58.933327] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:10.654 [2024-07-23 10:43:58.933413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.654 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.654 [2024-07-23 10:43:58.985909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.654 [2024-07-23 10:43:59.055218] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.654 [2024-07-23 10:43:59.055272] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.654 [2024-07-23 10:43:59.055285] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.654 [2024-07-23 10:43:59.055295] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.654 [2024-07-23 10:43:59.055305] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:10.654 [2024-07-23 10:43:59.055378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.912 [2024-07-23 10:43:59.265773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.912 [2024-07-23 10:43:59.281735] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:10.912 [2024-07-23 10:43:59.297792] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.912 [2024-07-23 10:43:59.307682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.478 10:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:11.478 10:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:11.478 10:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.478 10:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.478 10:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.478 10:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.478 10:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3859891 00:23:11.478 10:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3859891 /var/tmp/bdevperf.sock 00:23:11.478 10:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3859891 ']' 00:23:11.478 10:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:11.478 10:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:11.478 "subsystems": [ 00:23:11.478 { 00:23:11.478 "subsystem": "keyring", 00:23:11.478 "config": [] 00:23:11.478 }, 00:23:11.478 { 00:23:11.478 "subsystem": "iobuf", 
00:23:11.478 "config": [ 00:23:11.478 { 00:23:11.478 "method": "iobuf_set_options", 00:23:11.478 "params": { 00:23:11.478 "small_pool_count": 8192, 00:23:11.478 "large_pool_count": 1024, 00:23:11.478 "small_bufsize": 8192, 00:23:11.478 "large_bufsize": 135168 00:23:11.478 } 00:23:11.478 } 00:23:11.478 ] 00:23:11.478 }, 00:23:11.478 { 00:23:11.478 "subsystem": "sock", 00:23:11.478 "config": [ 00:23:11.478 { 00:23:11.478 "method": "sock_set_default_impl", 00:23:11.478 "params": { 00:23:11.478 "impl_name": "posix" 00:23:11.478 } 00:23:11.478 }, 00:23:11.478 { 00:23:11.478 "method": "sock_impl_set_options", 00:23:11.478 "params": { 00:23:11.478 "impl_name": "ssl", 00:23:11.478 "recv_buf_size": 4096, 00:23:11.478 "send_buf_size": 4096, 00:23:11.478 "enable_recv_pipe": true, 00:23:11.478 "enable_quickack": false, 00:23:11.478 "enable_placement_id": 0, 00:23:11.478 "enable_zerocopy_send_server": true, 00:23:11.478 "enable_zerocopy_send_client": false, 00:23:11.478 "zerocopy_threshold": 0, 00:23:11.478 "tls_version": 0, 00:23:11.478 "enable_ktls": false 00:23:11.478 } 00:23:11.478 }, 00:23:11.478 { 00:23:11.478 "method": "sock_impl_set_options", 00:23:11.478 "params": { 00:23:11.478 "impl_name": "posix", 00:23:11.478 "recv_buf_size": 2097152, 00:23:11.478 "send_buf_size": 2097152, 00:23:11.478 "enable_recv_pipe": true, 00:23:11.478 "enable_quickack": false, 00:23:11.478 "enable_placement_id": 0, 00:23:11.478 "enable_zerocopy_send_server": true, 00:23:11.478 "enable_zerocopy_send_client": false, 00:23:11.478 "zerocopy_threshold": 0, 00:23:11.478 "tls_version": 0, 00:23:11.478 "enable_ktls": false 00:23:11.478 } 00:23:11.478 } 00:23:11.478 ] 00:23:11.478 }, 00:23:11.478 { 00:23:11.478 "subsystem": "vmd", 00:23:11.478 "config": [] 00:23:11.478 }, 00:23:11.478 { 00:23:11.478 "subsystem": "accel", 00:23:11.478 "config": [ 00:23:11.478 { 00:23:11.478 "method": "accel_set_options", 00:23:11.478 "params": { 00:23:11.478 "small_cache_size": 128, 00:23:11.478 "large_cache_size": 16, 
00:23:11.478 "task_count": 2048, 00:23:11.478 "sequence_count": 2048, 00:23:11.478 "buf_count": 2048 00:23:11.478 } 00:23:11.478 } 00:23:11.478 ] 00:23:11.478 }, 00:23:11.478 { 00:23:11.478 "subsystem": "bdev", 00:23:11.478 "config": [ 00:23:11.478 { 00:23:11.478 "method": "bdev_set_options", 00:23:11.478 "params": { 00:23:11.478 "bdev_io_pool_size": 65535, 00:23:11.478 "bdev_io_cache_size": 256, 00:23:11.478 "bdev_auto_examine": true, 00:23:11.478 "iobuf_small_cache_size": 128, 00:23:11.478 "iobuf_large_cache_size": 16 00:23:11.478 } 00:23:11.478 }, 00:23:11.478 { 00:23:11.478 "method": "bdev_raid_set_options", 00:23:11.478 "params": { 00:23:11.478 "process_window_size_kb": 1024 00:23:11.478 } 00:23:11.478 }, 00:23:11.478 { 00:23:11.478 "method": "bdev_iscsi_set_options", 00:23:11.478 "params": { 00:23:11.478 "timeout_sec": 30 00:23:11.478 } 00:23:11.478 }, 00:23:11.478 { 00:23:11.478 "method": "bdev_nvme_set_options", 00:23:11.478 "params": { 00:23:11.478 "action_on_timeout": "none", 00:23:11.478 "timeout_us": 0, 00:23:11.478 "timeout_admin_us": 0, 00:23:11.478 "keep_alive_timeout_ms": 10000, 00:23:11.478 "arbitration_burst": 0, 00:23:11.478 "low_priority_weight": 0, 00:23:11.478 "medium_priority_weight": 0, 00:23:11.478 "high_priority_weight": 0, 00:23:11.478 "nvme_adminq_poll_period_us": 10000, 00:23:11.478 "nvme_ioq_poll_period_us": 0, 00:23:11.478 "io_queue_requests": 512, 00:23:11.478 "delay_cmd_submit": true, 00:23:11.478 "transport_retry_count": 4, 00:23:11.478 "bdev_retry_count": 3, 00:23:11.478 "transport_ack_timeout": 0, 00:23:11.478 "ctrlr_loss_timeout_sec": 0, 00:23:11.478 "reconnect_delay_sec": 0, 00:23:11.478 "fast_io_fail_timeout_sec": 0, 00:23:11.478 "disable_auto_failback": false, 00:23:11.478 "generate_uuids": false, 00:23:11.478 "transport_tos": 0, 00:23:11.478 "nvme_error_stat": false, 00:23:11.478 "rdma_srq_size": 0, 00:23:11.478 "io_path_stat": false, 00:23:11.478 "allow_accel_sequence": false, 00:23:11.478 "rdma_max_cq_size": 0, 
00:23:11.478 "rdma_cm_event_timeout_ms": 0, 00:23:11.478 "dhchap_digests": [ 00:23:11.478 "sha256", 00:23:11.478 "sha384", 00:23:11.478 "sha512" 00:23:11.478 ], 00:23:11.478 "dhchap_dhgroups": [ 00:23:11.478 "null", 00:23:11.478 "ffdhe2048", 00:23:11.478 "ffdhe3072", 00:23:11.478 "ffdhe4096", 00:23:11.478 "ffdhe6144", 00:23:11.479 "ffdhe8192" 00:23:11.479 ] 00:23:11.479 } 00:23:11.479 }, 00:23:11.479 { 00:23:11.479 "method": "bdev_nvme_attach_controller", 00:23:11.479 "params": { 00:23:11.479 "name": "TLSTEST", 00:23:11.479 "trtype": "TCP", 00:23:11.479 "adrfam": "IPv4", 00:23:11.479 "traddr": "10.0.0.2", 00:23:11.479 "trsvcid": "4420", 00:23:11.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.479 "prchk_reftag": false, 00:23:11.479 "prchk_guard": false, 00:23:11.479 "ctrlr_loss_timeout_sec": 0, 00:23:11.479 "reconnect_delay_sec": 0, 00:23:11.479 "fast_io_fail_timeout_sec": 0, 00:23:11.479 "psk": "/tmp/tmp.R4Eewn4ize", 00:23:11.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.479 "hdgst": false, 00:23:11.479 "ddgst": false 00:23:11.479 } 00:23:11.479 }, 00:23:11.479 { 00:23:11.479 "method": "bdev_nvme_set_hotplug", 00:23:11.479 "params": { 00:23:11.479 "period_us": 100000, 00:23:11.479 "enable": false 00:23:11.479 } 00:23:11.479 }, 00:23:11.479 { 00:23:11.479 "method": "bdev_wait_for_examine" 00:23:11.479 } 00:23:11.479 ] 00:23:11.479 }, 00:23:11.479 { 00:23:11.479 "subsystem": "nbd", 00:23:11.479 "config": [] 00:23:11.479 } 00:23:11.479 ] 00:23:11.479 }' 00:23:11.478 10:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.479 10:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.479 10:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:11.479 10:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.479 10:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.479 [2024-07-23 10:43:59.964080] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:11.479 [2024-07-23 10:43:59.964163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859891 ] 00:23:11.736 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.736 [2024-07-23 10:44:00.015396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.736 [2024-07-23 10:44:00.091416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.736 [2024-07-23 10:44:00.236148] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.736 [2024-07-23 10:44:00.236290] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:11.994 10:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:11.994 10:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:11.994 10:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:11.994 Running I/O for 10 seconds... 
00:23:24.180 00:23:24.180 Latency(us) 00:23:24.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.180 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:24.180 Verification LBA range: start 0x0 length 0x2000 00:23:24.180 TLSTESTn1 : 10.03 3254.82 12.71 0.00 0.00 39242.08 8980.86 72235.24 00:23:24.180 =================================================================================================================== 00:23:24.180 Total : 3254.82 12.71 0.00 0.00 39242.08 8980.86 72235.24 00:23:24.180 0 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3859891 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3859891 ']' 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3859891 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3859891 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3859891' 00:23:24.180 killing process with pid 3859891 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3859891 00:23:24.180 Received shutdown signal, test time was about 10.000000 seconds 00:23:24.180 00:23:24.180 Latency(us) 00:23:24.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.180 
=================================================================================================================== 00:23:24.180 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:24.180 [2024-07-23 10:44:10.534630] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3859891 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3859774 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3859774 ']' 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3859774 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3859774 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3859774' 00:23:24.180 killing process with pid 3859774 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3859774 00:23:24.180 [2024-07-23 10:44:10.707267] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3859774 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3860890 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3860890 00:23:24.180 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3860890 ']' 00:23:24.181 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.181 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:24.181 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.181 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:24.181 10:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.181 [2024-07-23 10:44:10.914539] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:24.181 [2024-07-23 10:44:10.914639] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.181 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.181 [2024-07-23 10:44:10.978070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.181 [2024-07-23 10:44:11.061225] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:24.181 [2024-07-23 10:44:11.061290] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.181 [2024-07-23 10:44:11.061305] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.181 [2024-07-23 10:44:11.061318] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.181 [2024-07-23 10:44:11.061330] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.181 [2024-07-23 10:44:11.061366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.181 10:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:24.181 10:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:24.181 10:44:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.181 10:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.181 10:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.181 10:44:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.181 10:44:11 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.R4Eewn4ize 00:23:24.181 10:44:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.R4Eewn4ize 00:23:24.181 10:44:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:24.181 [2024-07-23 10:44:11.460356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.181 10:44:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:24.181 10:44:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:24.181 [2024-07-23 10:44:12.037868] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.181 [2024-07-23 10:44:12.038110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.181 10:44:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:24.181 malloc0 00:23:24.181 10:44:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:24.181 10:44:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R4Eewn4ize 00:23:24.439 [2024-07-23 10:44:12.930958] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:24.698 10:44:12 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3861112 00:23:24.698 10:44:12 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.698 10:44:12 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:24.698 10:44:12 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3861112 /var/tmp/bdevperf.sock 00:23:24.698 10:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3861112 ']' 00:23:24.698 10:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.698 10:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:23:24.698 10:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.698 10:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:24.698 10:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.698 [2024-07-23 10:44:12.995906] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:24.698 [2024-07-23 10:44:12.995998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3861112 ] 00:23:24.698 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.698 [2024-07-23 10:44:13.055636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.698 [2024-07-23 10:44:13.143442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.975 10:44:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:24.976 10:44:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:24.976 10:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.R4Eewn4ize 00:23:25.237 10:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:25.237 [2024-07-23 10:44:13.704509] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.494 nvme0n1 00:23:25.494 
10:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:25.494 Running I/O for 1 seconds... 00:23:26.864 00:23:26.864 Latency(us) 00:23:26.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.864 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:26.864 Verification LBA range: start 0x0 length 0x2000 00:23:26.864 nvme0n1 : 1.02 3288.84 12.85 0.00 0.00 38470.72 8301.23 35923.44 00:23:26.864 =================================================================================================================== 00:23:26.864 Total : 3288.84 12.85 0.00 0.00 38470.72 8301.23 35923.44 00:23:26.864 0 00:23:26.864 10:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3861112 00:23:26.864 10:44:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3861112 ']' 00:23:26.864 10:44:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3861112 00:23:26.864 10:44:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:26.864 10:44:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:26.864 10:44:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3861112 00:23:26.864 10:44:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:26.864 10:44:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:26.864 10:44:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3861112' 00:23:26.864 killing process with pid 3861112 00:23:26.864 10:44:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3861112 00:23:26.864 Received shutdown signal, test time was about 1.000000 seconds 00:23:26.864 00:23:26.864 Latency(us) 00:23:26.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:26.864 =================================================================================================================== 00:23:26.864 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.864 10:44:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3861112 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3860890 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3860890 ']' 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3860890 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3860890 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3860890' 00:23:26.864 killing process with pid 3860890 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3860890 00:23:26.864 [2024-07-23 10:44:15.162478] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3860890 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=3861325 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3861325 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3861325 ']' 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:26.864 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.132 [2024-07-23 10:44:15.394368] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:27.132 [2024-07-23 10:44:15.394465] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.132 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.132 [2024-07-23 10:44:15.458741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.132 [2024-07-23 10:44:15.544817] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.132 [2024-07-23 10:44:15.544883] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:27.132 [2024-07-23 10:44:15.544900] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.132 [2024-07-23 10:44:15.544913] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.132 [2024-07-23 10:44:15.544927] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.132 [2024-07-23 10:44:15.544958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.389 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:27.389 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:27.389 10:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.389 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.389 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.389 10:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.389 10:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:27.389 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.389 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.390 [2024-07-23 10:44:15.675253] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.390 malloc0 00:23:27.390 [2024-07-23 10:44:15.705910] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.390 [2024-07-23 10:44:15.706170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.390 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.390 10:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3861351 00:23:27.390 10:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
3861351 /var/tmp/bdevperf.sock 00:23:27.390 10:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:27.390 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3861351 ']' 00:23:27.390 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.390 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:27.390 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.390 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:27.390 10:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.390 [2024-07-23 10:44:15.780084] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:27.390 [2024-07-23 10:44:15.780182] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3861351 ] 00:23:27.390 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.390 [2024-07-23 10:44:15.840833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.647 [2024-07-23 10:44:15.928552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.647 10:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:27.647 10:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:27.647 10:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.R4Eewn4ize 00:23:27.905 10:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:28.163 [2024-07-23 10:44:16.596626] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.420 nvme0n1 00:23:28.420 10:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:28.420 Running I/O for 1 seconds... 
00:23:29.354 00:23:29.354 Latency(us) 00:23:29.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.354 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:29.354 Verification LBA range: start 0x0 length 0x2000 00:23:29.354 nvme0n1 : 1.03 2680.64 10.47 0.00 0.00 47054.22 9514.86 39224.51 00:23:29.354 =================================================================================================================== 00:23:29.354 Total : 2680.64 10.47 0.00 0.00 47054.22 9514.86 39224.51 00:23:29.354 0 00:23:29.613 10:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:29.613 10:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.613 10:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.613 10:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.613 10:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:29.613 "subsystems": [ 00:23:29.613 { 00:23:29.613 "subsystem": "keyring", 00:23:29.613 "config": [ 00:23:29.613 { 00:23:29.613 "method": "keyring_file_add_key", 00:23:29.613 "params": { 00:23:29.613 "name": "key0", 00:23:29.613 "path": "/tmp/tmp.R4Eewn4ize" 00:23:29.613 } 00:23:29.613 } 00:23:29.613 ] 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "subsystem": "iobuf", 00:23:29.613 "config": [ 00:23:29.613 { 00:23:29.613 "method": "iobuf_set_options", 00:23:29.613 "params": { 00:23:29.613 "small_pool_count": 8192, 00:23:29.613 "large_pool_count": 1024, 00:23:29.613 "small_bufsize": 8192, 00:23:29.613 "large_bufsize": 135168 00:23:29.613 } 00:23:29.613 } 00:23:29.613 ] 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "subsystem": "sock", 00:23:29.613 "config": [ 00:23:29.613 { 00:23:29.613 "method": "sock_set_default_impl", 00:23:29.613 "params": { 00:23:29.613 "impl_name": "posix" 00:23:29.613 } 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "method": "sock_impl_set_options", 00:23:29.613 
"params": { 00:23:29.613 "impl_name": "ssl", 00:23:29.613 "recv_buf_size": 4096, 00:23:29.613 "send_buf_size": 4096, 00:23:29.613 "enable_recv_pipe": true, 00:23:29.613 "enable_quickack": false, 00:23:29.613 "enable_placement_id": 0, 00:23:29.613 "enable_zerocopy_send_server": true, 00:23:29.613 "enable_zerocopy_send_client": false, 00:23:29.613 "zerocopy_threshold": 0, 00:23:29.613 "tls_version": 0, 00:23:29.613 "enable_ktls": false 00:23:29.613 } 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "method": "sock_impl_set_options", 00:23:29.613 "params": { 00:23:29.613 "impl_name": "posix", 00:23:29.613 "recv_buf_size": 2097152, 00:23:29.613 "send_buf_size": 2097152, 00:23:29.613 "enable_recv_pipe": true, 00:23:29.613 "enable_quickack": false, 00:23:29.613 "enable_placement_id": 0, 00:23:29.613 "enable_zerocopy_send_server": true, 00:23:29.613 "enable_zerocopy_send_client": false, 00:23:29.613 "zerocopy_threshold": 0, 00:23:29.613 "tls_version": 0, 00:23:29.613 "enable_ktls": false 00:23:29.613 } 00:23:29.613 } 00:23:29.613 ] 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "subsystem": "vmd", 00:23:29.613 "config": [] 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "subsystem": "accel", 00:23:29.613 "config": [ 00:23:29.613 { 00:23:29.613 "method": "accel_set_options", 00:23:29.613 "params": { 00:23:29.613 "small_cache_size": 128, 00:23:29.613 "large_cache_size": 16, 00:23:29.613 "task_count": 2048, 00:23:29.613 "sequence_count": 2048, 00:23:29.613 "buf_count": 2048 00:23:29.613 } 00:23:29.613 } 00:23:29.613 ] 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "subsystem": "bdev", 00:23:29.613 "config": [ 00:23:29.613 { 00:23:29.613 "method": "bdev_set_options", 00:23:29.613 "params": { 00:23:29.613 "bdev_io_pool_size": 65535, 00:23:29.613 "bdev_io_cache_size": 256, 00:23:29.613 "bdev_auto_examine": true, 00:23:29.613 "iobuf_small_cache_size": 128, 00:23:29.613 "iobuf_large_cache_size": 16 00:23:29.613 } 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "method": "bdev_raid_set_options", 
00:23:29.613 "params": { 00:23:29.613 "process_window_size_kb": 1024 00:23:29.613 } 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "method": "bdev_iscsi_set_options", 00:23:29.613 "params": { 00:23:29.613 "timeout_sec": 30 00:23:29.613 } 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "method": "bdev_nvme_set_options", 00:23:29.613 "params": { 00:23:29.613 "action_on_timeout": "none", 00:23:29.613 "timeout_us": 0, 00:23:29.613 "timeout_admin_us": 0, 00:23:29.613 "keep_alive_timeout_ms": 10000, 00:23:29.613 "arbitration_burst": 0, 00:23:29.613 "low_priority_weight": 0, 00:23:29.613 "medium_priority_weight": 0, 00:23:29.613 "high_priority_weight": 0, 00:23:29.613 "nvme_adminq_poll_period_us": 10000, 00:23:29.613 "nvme_ioq_poll_period_us": 0, 00:23:29.613 "io_queue_requests": 0, 00:23:29.613 "delay_cmd_submit": true, 00:23:29.613 "transport_retry_count": 4, 00:23:29.613 "bdev_retry_count": 3, 00:23:29.613 "transport_ack_timeout": 0, 00:23:29.613 "ctrlr_loss_timeout_sec": 0, 00:23:29.613 "reconnect_delay_sec": 0, 00:23:29.613 "fast_io_fail_timeout_sec": 0, 00:23:29.613 "disable_auto_failback": false, 00:23:29.613 "generate_uuids": false, 00:23:29.613 "transport_tos": 0, 00:23:29.613 "nvme_error_stat": false, 00:23:29.613 "rdma_srq_size": 0, 00:23:29.613 "io_path_stat": false, 00:23:29.613 "allow_accel_sequence": false, 00:23:29.613 "rdma_max_cq_size": 0, 00:23:29.613 "rdma_cm_event_timeout_ms": 0, 00:23:29.613 "dhchap_digests": [ 00:23:29.613 "sha256", 00:23:29.613 "sha384", 00:23:29.613 "sha512" 00:23:29.613 ], 00:23:29.613 "dhchap_dhgroups": [ 00:23:29.613 "null", 00:23:29.613 "ffdhe2048", 00:23:29.613 "ffdhe3072", 00:23:29.613 "ffdhe4096", 00:23:29.613 "ffdhe6144", 00:23:29.613 "ffdhe8192" 00:23:29.613 ] 00:23:29.613 } 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "method": "bdev_nvme_set_hotplug", 00:23:29.613 "params": { 00:23:29.613 "period_us": 100000, 00:23:29.613 "enable": false 00:23:29.613 } 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "method": "bdev_malloc_create", 
00:23:29.613 "params": { 00:23:29.613 "name": "malloc0", 00:23:29.613 "num_blocks": 8192, 00:23:29.613 "block_size": 4096, 00:23:29.613 "physical_block_size": 4096, 00:23:29.613 "uuid": "a925f99c-ab66-4573-ba71-1f8b77b8c8a4", 00:23:29.613 "optimal_io_boundary": 0 00:23:29.613 } 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "method": "bdev_wait_for_examine" 00:23:29.613 } 00:23:29.613 ] 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "subsystem": "nbd", 00:23:29.613 "config": [] 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "subsystem": "scheduler", 00:23:29.613 "config": [ 00:23:29.613 { 00:23:29.613 "method": "framework_set_scheduler", 00:23:29.613 "params": { 00:23:29.613 "name": "static" 00:23:29.613 } 00:23:29.613 } 00:23:29.613 ] 00:23:29.613 }, 00:23:29.613 { 00:23:29.613 "subsystem": "nvmf", 00:23:29.613 "config": [ 00:23:29.613 { 00:23:29.613 "method": "nvmf_set_config", 00:23:29.613 "params": { 00:23:29.613 "discovery_filter": "match_any", 00:23:29.613 "admin_cmd_passthru": { 00:23:29.613 "identify_ctrlr": false 00:23:29.614 } 00:23:29.614 } 00:23:29.614 }, 00:23:29.614 { 00:23:29.614 "method": "nvmf_set_max_subsystems", 00:23:29.614 "params": { 00:23:29.614 "max_subsystems": 1024 00:23:29.614 } 00:23:29.614 }, 00:23:29.614 { 00:23:29.614 "method": "nvmf_set_crdt", 00:23:29.614 "params": { 00:23:29.614 "crdt1": 0, 00:23:29.614 "crdt2": 0, 00:23:29.614 "crdt3": 0 00:23:29.614 } 00:23:29.614 }, 00:23:29.614 { 00:23:29.614 "method": "nvmf_create_transport", 00:23:29.614 "params": { 00:23:29.614 "trtype": "TCP", 00:23:29.614 "max_queue_depth": 128, 00:23:29.614 "max_io_qpairs_per_ctrlr": 127, 00:23:29.614 "in_capsule_data_size": 4096, 00:23:29.614 "max_io_size": 131072, 00:23:29.614 "io_unit_size": 131072, 00:23:29.614 "max_aq_depth": 128, 00:23:29.614 "num_shared_buffers": 511, 00:23:29.614 "buf_cache_size": 4294967295, 00:23:29.614 "dif_insert_or_strip": false, 00:23:29.614 "zcopy": false, 00:23:29.614 "c2h_success": false, 00:23:29.614 "sock_priority": 0, 
00:23:29.614 "abort_timeout_sec": 1, 00:23:29.614 "ack_timeout": 0, 00:23:29.614 "data_wr_pool_size": 0 00:23:29.614 } 00:23:29.614 }, 00:23:29.614 { 00:23:29.614 "method": "nvmf_create_subsystem", 00:23:29.614 "params": { 00:23:29.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.614 "allow_any_host": false, 00:23:29.614 "serial_number": "00000000000000000000", 00:23:29.614 "model_number": "SPDK bdev Controller", 00:23:29.614 "max_namespaces": 32, 00:23:29.614 "min_cntlid": 1, 00:23:29.614 "max_cntlid": 65519, 00:23:29.614 "ana_reporting": false 00:23:29.614 } 00:23:29.614 }, 00:23:29.614 { 00:23:29.614 "method": "nvmf_subsystem_add_host", 00:23:29.614 "params": { 00:23:29.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.614 "host": "nqn.2016-06.io.spdk:host1", 00:23:29.614 "psk": "key0" 00:23:29.614 } 00:23:29.614 }, 00:23:29.614 { 00:23:29.614 "method": "nvmf_subsystem_add_ns", 00:23:29.614 "params": { 00:23:29.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.614 "namespace": { 00:23:29.614 "nsid": 1, 00:23:29.614 "bdev_name": "malloc0", 00:23:29.614 "nguid": "A925F99CAB664573BA711F8B77B8C8A4", 00:23:29.614 "uuid": "a925f99c-ab66-4573-ba71-1f8b77b8c8a4", 00:23:29.614 "no_auto_visible": false 00:23:29.614 } 00:23:29.614 } 00:23:29.614 }, 00:23:29.614 { 00:23:29.614 "method": "nvmf_subsystem_add_listener", 00:23:29.614 "params": { 00:23:29.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.614 "listen_address": { 00:23:29.614 "trtype": "TCP", 00:23:29.614 "adrfam": "IPv4", 00:23:29.614 "traddr": "10.0.0.2", 00:23:29.614 "trsvcid": "4420" 00:23:29.614 }, 00:23:29.614 "secure_channel": true 00:23:29.614 } 00:23:29.614 } 00:23:29.614 ] 00:23:29.614 } 00:23:29.614 ] 00:23:29.614 }' 00:23:29.614 10:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:29.872 10:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:29.872 "subsystems": [ 00:23:29.872 { 
00:23:29.872 "subsystem": "keyring", 00:23:29.872 "config": [ 00:23:29.872 { 00:23:29.872 "method": "keyring_file_add_key", 00:23:29.872 "params": { 00:23:29.872 "name": "key0", 00:23:29.872 "path": "/tmp/tmp.R4Eewn4ize" 00:23:29.872 } 00:23:29.872 } 00:23:29.872 ] 00:23:29.872 }, 00:23:29.872 { 00:23:29.872 "subsystem": "iobuf", 00:23:29.872 "config": [ 00:23:29.872 { 00:23:29.872 "method": "iobuf_set_options", 00:23:29.872 "params": { 00:23:29.872 "small_pool_count": 8192, 00:23:29.872 "large_pool_count": 1024, 00:23:29.872 "small_bufsize": 8192, 00:23:29.872 "large_bufsize": 135168 00:23:29.872 } 00:23:29.872 } 00:23:29.872 ] 00:23:29.872 }, 00:23:29.872 { 00:23:29.872 "subsystem": "sock", 00:23:29.872 "config": [ 00:23:29.872 { 00:23:29.872 "method": "sock_set_default_impl", 00:23:29.872 "params": { 00:23:29.872 "impl_name": "posix" 00:23:29.873 } 00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "method": "sock_impl_set_options", 00:23:29.873 "params": { 00:23:29.873 "impl_name": "ssl", 00:23:29.873 "recv_buf_size": 4096, 00:23:29.873 "send_buf_size": 4096, 00:23:29.873 "enable_recv_pipe": true, 00:23:29.873 "enable_quickack": false, 00:23:29.873 "enable_placement_id": 0, 00:23:29.873 "enable_zerocopy_send_server": true, 00:23:29.873 "enable_zerocopy_send_client": false, 00:23:29.873 "zerocopy_threshold": 0, 00:23:29.873 "tls_version": 0, 00:23:29.873 "enable_ktls": false 00:23:29.873 } 00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "method": "sock_impl_set_options", 00:23:29.873 "params": { 00:23:29.873 "impl_name": "posix", 00:23:29.873 "recv_buf_size": 2097152, 00:23:29.873 "send_buf_size": 2097152, 00:23:29.873 "enable_recv_pipe": true, 00:23:29.873 "enable_quickack": false, 00:23:29.873 "enable_placement_id": 0, 00:23:29.873 "enable_zerocopy_send_server": true, 00:23:29.873 "enable_zerocopy_send_client": false, 00:23:29.873 "zerocopy_threshold": 0, 00:23:29.873 "tls_version": 0, 00:23:29.873 "enable_ktls": false 00:23:29.873 } 00:23:29.873 } 00:23:29.873 ] 
00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "subsystem": "vmd", 00:23:29.873 "config": [] 00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "subsystem": "accel", 00:23:29.873 "config": [ 00:23:29.873 { 00:23:29.873 "method": "accel_set_options", 00:23:29.873 "params": { 00:23:29.873 "small_cache_size": 128, 00:23:29.873 "large_cache_size": 16, 00:23:29.873 "task_count": 2048, 00:23:29.873 "sequence_count": 2048, 00:23:29.873 "buf_count": 2048 00:23:29.873 } 00:23:29.873 } 00:23:29.873 ] 00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "subsystem": "bdev", 00:23:29.873 "config": [ 00:23:29.873 { 00:23:29.873 "method": "bdev_set_options", 00:23:29.873 "params": { 00:23:29.873 "bdev_io_pool_size": 65535, 00:23:29.873 "bdev_io_cache_size": 256, 00:23:29.873 "bdev_auto_examine": true, 00:23:29.873 "iobuf_small_cache_size": 128, 00:23:29.873 "iobuf_large_cache_size": 16 00:23:29.873 } 00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "method": "bdev_raid_set_options", 00:23:29.873 "params": { 00:23:29.873 "process_window_size_kb": 1024 00:23:29.873 } 00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "method": "bdev_iscsi_set_options", 00:23:29.873 "params": { 00:23:29.873 "timeout_sec": 30 00:23:29.873 } 00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "method": "bdev_nvme_set_options", 00:23:29.873 "params": { 00:23:29.873 "action_on_timeout": "none", 00:23:29.873 "timeout_us": 0, 00:23:29.873 "timeout_admin_us": 0, 00:23:29.873 "keep_alive_timeout_ms": 10000, 00:23:29.873 "arbitration_burst": 0, 00:23:29.873 "low_priority_weight": 0, 00:23:29.873 "medium_priority_weight": 0, 00:23:29.873 "high_priority_weight": 0, 00:23:29.873 "nvme_adminq_poll_period_us": 10000, 00:23:29.873 "nvme_ioq_poll_period_us": 0, 00:23:29.873 "io_queue_requests": 512, 00:23:29.873 "delay_cmd_submit": true, 00:23:29.873 "transport_retry_count": 4, 00:23:29.873 "bdev_retry_count": 3, 00:23:29.873 "transport_ack_timeout": 0, 00:23:29.873 "ctrlr_loss_timeout_sec": 0, 00:23:29.873 "reconnect_delay_sec": 0, 00:23:29.873 
"fast_io_fail_timeout_sec": 0, 00:23:29.873 "disable_auto_failback": false, 00:23:29.873 "generate_uuids": false, 00:23:29.873 "transport_tos": 0, 00:23:29.873 "nvme_error_stat": false, 00:23:29.873 "rdma_srq_size": 0, 00:23:29.873 "io_path_stat": false, 00:23:29.873 "allow_accel_sequence": false, 00:23:29.873 "rdma_max_cq_size": 0, 00:23:29.873 "rdma_cm_event_timeout_ms": 0, 00:23:29.873 "dhchap_digests": [ 00:23:29.873 "sha256", 00:23:29.873 "sha384", 00:23:29.873 "sha512" 00:23:29.873 ], 00:23:29.873 "dhchap_dhgroups": [ 00:23:29.873 "null", 00:23:29.873 "ffdhe2048", 00:23:29.873 "ffdhe3072", 00:23:29.873 "ffdhe4096", 00:23:29.873 "ffdhe6144", 00:23:29.873 "ffdhe8192" 00:23:29.873 ] 00:23:29.873 } 00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "method": "bdev_nvme_attach_controller", 00:23:29.873 "params": { 00:23:29.873 "name": "nvme0", 00:23:29.873 "trtype": "TCP", 00:23:29.873 "adrfam": "IPv4", 00:23:29.873 "traddr": "10.0.0.2", 00:23:29.873 "trsvcid": "4420", 00:23:29.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.873 "prchk_reftag": false, 00:23:29.873 "prchk_guard": false, 00:23:29.873 "ctrlr_loss_timeout_sec": 0, 00:23:29.873 "reconnect_delay_sec": 0, 00:23:29.873 "fast_io_fail_timeout_sec": 0, 00:23:29.873 "psk": "key0", 00:23:29.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.873 "hdgst": false, 00:23:29.873 "ddgst": false 00:23:29.873 } 00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "method": "bdev_nvme_set_hotplug", 00:23:29.873 "params": { 00:23:29.873 "period_us": 100000, 00:23:29.873 "enable": false 00:23:29.873 } 00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "method": "bdev_enable_histogram", 00:23:29.873 "params": { 00:23:29.873 "name": "nvme0n1", 00:23:29.873 "enable": true 00:23:29.873 } 00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "method": "bdev_wait_for_examine" 00:23:29.873 } 00:23:29.873 ] 00:23:29.873 }, 00:23:29.873 { 00:23:29.873 "subsystem": "nbd", 00:23:29.873 "config": [] 00:23:29.873 } 00:23:29.873 ] 00:23:29.873 }' 00:23:29.873 
10:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3861351 00:23:29.873 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3861351 ']' 00:23:29.873 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3861351 00:23:29.873 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:29.873 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:29.873 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3861351 00:23:29.873 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:29.873 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:29.873 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3861351' 00:23:29.873 killing process with pid 3861351 00:23:29.873 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3861351 00:23:29.873 Received shutdown signal, test time was about 1.000000 seconds 00:23:29.873 00:23:29.873 Latency(us) 00:23:29.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.873 =================================================================================================================== 00:23:29.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:29.873 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3861351 00:23:30.132 10:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3861325 00:23:30.132 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3861325 ']' 00:23:30.132 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3861325 00:23:30.132 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:30.132 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:30.132 10:44:18 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3861325 00:23:30.132 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:30.132 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:30.132 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3861325' 00:23:30.132 killing process with pid 3861325 00:23:30.132 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3861325 00:23:30.132 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3861325 00:23:30.391 10:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:30.391 10:44:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.391 10:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:30.391 "subsystems": [ 00:23:30.391 { 00:23:30.391 "subsystem": "keyring", 00:23:30.391 "config": [ 00:23:30.391 { 00:23:30.391 "method": "keyring_file_add_key", 00:23:30.391 "params": { 00:23:30.391 "name": "key0", 00:23:30.391 "path": "/tmp/tmp.R4Eewn4ize" 00:23:30.391 } 00:23:30.391 } 00:23:30.391 ] 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "subsystem": "iobuf", 00:23:30.391 "config": [ 00:23:30.391 { 00:23:30.391 "method": "iobuf_set_options", 00:23:30.391 "params": { 00:23:30.391 "small_pool_count": 8192, 00:23:30.391 "large_pool_count": 1024, 00:23:30.391 "small_bufsize": 8192, 00:23:30.391 "large_bufsize": 135168 00:23:30.391 } 00:23:30.391 } 00:23:30.391 ] 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "subsystem": "sock", 00:23:30.391 "config": [ 00:23:30.391 { 00:23:30.391 "method": "sock_set_default_impl", 00:23:30.391 "params": { 00:23:30.391 "impl_name": "posix" 00:23:30.391 } 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "method": "sock_impl_set_options", 00:23:30.391 "params": { 00:23:30.391 "impl_name": "ssl", 00:23:30.391 "recv_buf_size": 4096, 00:23:30.391 "send_buf_size": 4096, 
00:23:30.391 "enable_recv_pipe": true, 00:23:30.391 "enable_quickack": false, 00:23:30.391 "enable_placement_id": 0, 00:23:30.391 "enable_zerocopy_send_server": true, 00:23:30.391 "enable_zerocopy_send_client": false, 00:23:30.391 "zerocopy_threshold": 0, 00:23:30.391 "tls_version": 0, 00:23:30.391 "enable_ktls": false 00:23:30.391 } 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "method": "sock_impl_set_options", 00:23:30.391 "params": { 00:23:30.391 "impl_name": "posix", 00:23:30.391 "recv_buf_size": 2097152, 00:23:30.391 "send_buf_size": 2097152, 00:23:30.391 "enable_recv_pipe": true, 00:23:30.391 "enable_quickack": false, 00:23:30.391 "enable_placement_id": 0, 00:23:30.391 "enable_zerocopy_send_server": true, 00:23:30.391 "enable_zerocopy_send_client": false, 00:23:30.391 "zerocopy_threshold": 0, 00:23:30.391 "tls_version": 0, 00:23:30.391 "enable_ktls": false 00:23:30.391 } 00:23:30.391 } 00:23:30.391 ] 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "subsystem": "vmd", 00:23:30.391 "config": [] 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "subsystem": "accel", 00:23:30.391 "config": [ 00:23:30.391 { 00:23:30.391 "method": "accel_set_options", 00:23:30.391 "params": { 00:23:30.391 "small_cache_size": 128, 00:23:30.391 "large_cache_size": 16, 00:23:30.391 "task_count": 2048, 00:23:30.391 "sequence_count": 2048, 00:23:30.391 "buf_count": 2048 00:23:30.391 } 00:23:30.391 } 00:23:30.391 ] 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "subsystem": "bdev", 00:23:30.391 "config": [ 00:23:30.391 { 00:23:30.391 "method": "bdev_set_options", 00:23:30.391 "params": { 00:23:30.391 "bdev_io_pool_size": 65535, 00:23:30.391 "bdev_io_cache_size": 256, 00:23:30.391 "bdev_auto_examine": true, 00:23:30.391 "iobuf_small_cache_size": 128, 00:23:30.391 "iobuf_large_cache_size": 16 00:23:30.391 } 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "method": "bdev_raid_set_options", 00:23:30.391 "params": { 00:23:30.391 "process_window_size_kb": 1024 00:23:30.391 } 00:23:30.391 }, 00:23:30.391 { 
00:23:30.391 "method": "bdev_iscsi_set_options", 00:23:30.391 "params": { 00:23:30.391 "timeout_sec": 30 00:23:30.391 } 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "method": "bdev_nvme_set_options", 00:23:30.391 "params": { 00:23:30.391 "action_on_timeout": "none", 00:23:30.391 "timeout_us": 0, 00:23:30.391 "timeout_admin_us": 0, 00:23:30.391 "keep_alive_timeout_ms": 10000, 00:23:30.391 "arbitration_burst": 0, 00:23:30.391 "low_priority_weight": 0, 00:23:30.391 "medium_priority_weight": 0, 00:23:30.391 "high_priority_weight": 0, 00:23:30.391 "nvme_adminq_poll_period_us": 10000, 00:23:30.391 "nvme_ioq_poll_period_us": 0, 00:23:30.391 "io_queue_requests": 0, 00:23:30.391 "delay_cmd_submit": true, 00:23:30.391 "transport_retry_count": 4, 00:23:30.391 "bdev_retry_count": 3, 00:23:30.391 "transport_ack_timeout": 0, 00:23:30.391 "ctrlr_loss_timeout_sec": 0, 00:23:30.391 "reconnect_delay_sec": 0, 00:23:30.391 "fast_io_fail_timeout_sec": 0, 00:23:30.391 "disable_auto_failback": false, 00:23:30.391 "generate_uuids": false, 00:23:30.391 "transport_tos": 0, 00:23:30.391 "nvme_error_stat": false, 00:23:30.391 "rdma_srq_size": 0, 00:23:30.391 "io_path_stat": false, 00:23:30.391 "allow_accel_sequence": false, 00:23:30.391 "rdma_max_cq_size": 0, 00:23:30.391 "rdma_cm_event_timeout_ms": 0, 00:23:30.391 "dhchap_digests": [ 00:23:30.391 "sha256", 00:23:30.391 "sha384", 00:23:30.391 "sha512" 00:23:30.391 ], 00:23:30.391 "dhchap_dhgroups": [ 00:23:30.391 "null", 00:23:30.391 "ffdhe2048", 00:23:30.391 "ffdhe3072", 00:23:30.391 "ffdhe4096", 00:23:30.391 "ffdhe6144", 00:23:30.391 "ffdhe8192" 00:23:30.391 ] 00:23:30.391 } 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "method": "bdev_nvme_set_hotplug", 00:23:30.391 "params": { 00:23:30.391 "period_us": 100000, 00:23:30.391 "enable": false 00:23:30.391 } 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "method": "bdev_malloc_create", 00:23:30.391 "params": { 00:23:30.391 "name": "malloc0", 00:23:30.391 "num_blocks": 8192, 00:23:30.391 
"block_size": 4096, 00:23:30.391 "physical_block_size": 4096, 00:23:30.391 "uuid": "a925f99c-ab66-4573-ba71-1f8b77b8c8a4", 00:23:30.391 "optimal_io_boundary": 0 00:23:30.391 } 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "method": "bdev_wait_for_examine" 00:23:30.391 } 00:23:30.391 ] 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "subsystem": "nbd", 00:23:30.391 "config": [] 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "subsystem": "scheduler", 00:23:30.391 "config": [ 00:23:30.391 { 00:23:30.391 "method": "framework_set_scheduler", 00:23:30.391 "params": { 00:23:30.391 "name": "static" 00:23:30.391 } 00:23:30.391 } 00:23:30.391 ] 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "subsystem": "nvmf", 00:23:30.391 "config": [ 00:23:30.391 { 00:23:30.391 "method": "nvmf_set_config", 00:23:30.391 "params": { 00:23:30.391 "discovery_filter": "match_any", 00:23:30.391 "admin_cmd_passthru": { 00:23:30.391 "identify_ctrlr": false 00:23:30.391 } 00:23:30.391 } 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "method": "nvmf_set_max_subsystems", 00:23:30.391 "params": { 00:23:30.391 "max_subsystems": 1024 00:23:30.391 } 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "method": "nvmf_set_crdt", 00:23:30.391 "params": { 00:23:30.391 "crdt1": 0, 00:23:30.391 "crdt2": 0, 00:23:30.391 "crdt3": 0 00:23:30.391 } 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "method": "nvmf_create_transport", 00:23:30.391 "params": { 00:23:30.391 "trtype": "TCP", 00:23:30.391 "max_queue_depth": 128, 00:23:30.391 "max_io_qpairs_per_ctrlr": 127, 00:23:30.391 "in_capsule_data_size": 4096, 00:23:30.391 "max_io_size": 131072, 00:23:30.391 "io_unit_size": 131072, 00:23:30.391 "max_aq_depth": 128, 00:23:30.391 "num_shared_buffers": 511, 00:23:30.391 "buf_cache_size": 4294967295, 00:23:30.391 "dif_insert_or_strip": false, 00:23:30.391 "zcopy": false, 00:23:30.391 "c2h_success": false, 00:23:30.391 "sock_priority": 0, 00:23:30.391 "abort_timeout_sec": 1, 00:23:30.391 "ack_timeout": 0, 00:23:30.391 "data_wr_pool_size": 0 
00:23:30.391 } 00:23:30.391 }, 00:23:30.391 { 00:23:30.391 "method": "nvmf_create_subsystem", 00:23:30.391 "params": { 00:23:30.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.391 "allow_any_host": false, 00:23:30.391 "serial_number": "00000000000000000000", 00:23:30.392 "model_number": "SPDK bdev Controller", 00:23:30.392 "max_namespaces": 32, 00:23:30.392 "min_cntlid": 1, 00:23:30.392 "max_cntlid": 65519, 00:23:30.392 "ana_reporting": false 00:23:30.392 } 00:23:30.392 }, 00:23:30.392 { 00:23:30.392 "method": "nvmf_subsystem_add_host", 00:23:30.392 "params": { 00:23:30.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.392 "host": "nqn.2016-06.io.spdk:host1", 00:23:30.392 "psk": "key0" 00:23:30.392 } 00:23:30.392 }, 00:23:30.392 { 00:23:30.392 "method": "nvmf_subsystem_add_ns", 00:23:30.392 "params": { 00:23:30.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.392 "namespace": { 00:23:30.392 "nsid": 1, 00:23:30.392 "bdev_name": "malloc0", 00:23:30.392 "nguid": "A925F99CAB664573BA711F8B77B8C8A4", 00:23:30.392 "uuid": "a925f99c-ab66-4573-ba71-1f8b77b8c8a4", 00:23:30.392 "no_auto_visible": false 00:23:30.392 } 00:23:30.392 } 00:23:30.392 }, 00:23:30.392 { 00:23:30.392 "method": "nvmf_subsystem_add_listener", 00:23:30.392 "params": { 00:23:30.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.392 "listen_address": { 00:23:30.392 "trtype": "TCP", 00:23:30.392 "adrfam": "IPv4", 00:23:30.392 "traddr": "10.0.0.2", 00:23:30.392 "trsvcid": "4420" 00:23:30.392 }, 00:23:30.392 "secure_channel": true 00:23:30.392 } 00:23:30.392 } 00:23:30.392 ] 00:23:30.392 } 00:23:30.392 ] 00:23:30.392 }' 00:23:30.392 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:30.392 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.392 10:44:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3861666 00:23:30.392 10:44:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:30.392 10:44:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3861666 00:23:30.392 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3861666 ']' 00:23:30.392 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.392 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:30.392 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.392 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:30.392 10:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.392 [2024-07-23 10:44:18.766150] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:30.392 [2024-07-23 10:44:18.766245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.392 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.392 [2024-07-23 10:44:18.831496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.650 [2024-07-23 10:44:18.920952] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.650 [2024-07-23 10:44:18.921022] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:30.650 [2024-07-23 10:44:18.921038] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.650 [2024-07-23 10:44:18.921051] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.650 [2024-07-23 10:44:18.921063] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.650 [2024-07-23 10:44:18.921159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.650 [2024-07-23 10:44:19.144178] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.908 [2024-07-23 10:44:19.176194] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:30.908 [2024-07-23 10:44:19.185694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.476 10:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3861788 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3861788 /var/tmp/bdevperf.sock 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3861788 ']' 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:31.477 10:44:19 
nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:31.477 "subsystems": [ 00:23:31.477 { 00:23:31.477 "subsystem": "keyring", 00:23:31.477 "config": [ 00:23:31.477 { 00:23:31.477 "method": "keyring_file_add_key", 00:23:31.477 "params": { 00:23:31.477 "name": "key0", 00:23:31.477 "path": "/tmp/tmp.R4Eewn4ize" 00:23:31.477 } 00:23:31.477 } 00:23:31.477 ] 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "subsystem": "iobuf", 00:23:31.477 "config": [ 00:23:31.477 { 00:23:31.477 "method": "iobuf_set_options", 00:23:31.477 "params": { 00:23:31.477 "small_pool_count": 8192, 00:23:31.477 "large_pool_count": 1024, 00:23:31.477 "small_bufsize": 8192, 00:23:31.477 "large_bufsize": 135168 00:23:31.477 } 00:23:31.477 } 00:23:31.477 ] 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "subsystem": "sock", 00:23:31.477 "config": [ 00:23:31.477 { 00:23:31.477 "method": "sock_set_default_impl", 00:23:31.477 "params": { 00:23:31.477 "impl_name": "posix" 00:23:31.477 } 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "method": "sock_impl_set_options", 00:23:31.477 "params": { 00:23:31.477 "impl_name": "ssl", 00:23:31.477 "recv_buf_size": 4096, 00:23:31.477 "send_buf_size": 4096, 00:23:31.477 "enable_recv_pipe": true, 00:23:31.477 "enable_quickack": false, 00:23:31.477 "enable_placement_id": 0, 00:23:31.477 "enable_zerocopy_send_server": true, 00:23:31.477 "enable_zerocopy_send_client": false, 00:23:31.477 
"zerocopy_threshold": 0, 00:23:31.477 "tls_version": 0, 00:23:31.477 "enable_ktls": false 00:23:31.477 } 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "method": "sock_impl_set_options", 00:23:31.477 "params": { 00:23:31.477 "impl_name": "posix", 00:23:31.477 "recv_buf_size": 2097152, 00:23:31.477 "send_buf_size": 2097152, 00:23:31.477 "enable_recv_pipe": true, 00:23:31.477 "enable_quickack": false, 00:23:31.477 "enable_placement_id": 0, 00:23:31.477 "enable_zerocopy_send_server": true, 00:23:31.477 "enable_zerocopy_send_client": false, 00:23:31.477 "zerocopy_threshold": 0, 00:23:31.477 "tls_version": 0, 00:23:31.477 "enable_ktls": false 00:23:31.477 } 00:23:31.477 } 00:23:31.477 ] 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "subsystem": "vmd", 00:23:31.477 "config": [] 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "subsystem": "accel", 00:23:31.477 "config": [ 00:23:31.477 { 00:23:31.477 "method": "accel_set_options", 00:23:31.477 "params": { 00:23:31.477 "small_cache_size": 128, 00:23:31.477 "large_cache_size": 16, 00:23:31.477 "task_count": 2048, 00:23:31.477 "sequence_count": 2048, 00:23:31.477 "buf_count": 2048 00:23:31.477 } 00:23:31.477 } 00:23:31.477 ] 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "subsystem": "bdev", 00:23:31.477 "config": [ 00:23:31.477 { 00:23:31.477 "method": "bdev_set_options", 00:23:31.477 "params": { 00:23:31.477 "bdev_io_pool_size": 65535, 00:23:31.477 "bdev_io_cache_size": 256, 00:23:31.477 "bdev_auto_examine": true, 00:23:31.477 "iobuf_small_cache_size": 128, 00:23:31.477 "iobuf_large_cache_size": 16 00:23:31.477 } 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "method": "bdev_raid_set_options", 00:23:31.477 "params": { 00:23:31.477 "process_window_size_kb": 1024 00:23:31.477 } 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "method": "bdev_iscsi_set_options", 00:23:31.477 "params": { 00:23:31.477 "timeout_sec": 30 00:23:31.477 } 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "method": "bdev_nvme_set_options", 00:23:31.477 "params": { 00:23:31.477 
"action_on_timeout": "none", 00:23:31.477 "timeout_us": 0, 00:23:31.477 "timeout_admin_us": 0, 00:23:31.477 "keep_alive_timeout_ms": 10000, 00:23:31.477 "arbitration_burst": 0, 00:23:31.477 "low_priority_weight": 0, 00:23:31.477 "medium_priority_weight": 0, 00:23:31.477 "high_priority_weight": 0, 00:23:31.477 "nvme_adminq_poll_period_us": 10000, 00:23:31.477 "nvme_ioq_poll_period_us": 0, 00:23:31.477 "io_queue_requests": 512, 00:23:31.477 "delay_cmd_submit": true, 00:23:31.477 "transport_retry_count": 4, 00:23:31.477 "bdev_retry_count": 3, 00:23:31.477 "transport_ack_timeout": 0, 00:23:31.477 "ctrlr_loss_timeout_sec": 0, 00:23:31.477 "reconnect_delay_sec": 0, 00:23:31.477 "fast_io_fail_timeout_sec": 0, 00:23:31.477 "disable_auto_failback": false, 00:23:31.477 "generate_uuids": false, 00:23:31.477 "transport_tos": 0, 00:23:31.477 "nvme_error_stat": false, 00:23:31.477 "rdma_srq_size": 0, 00:23:31.477 "io_path_stat": false, 00:23:31.477 "allow_accel_sequence": false, 00:23:31.477 "rdma_max_cq_size": 0, 00:23:31.477 "rdma_cm_event_timeout_ms": 0, 00:23:31.477 "dhchap_digests": [ 00:23:31.477 "sha256", 00:23:31.477 "sha384", 00:23:31.477 "sha512" 00:23:31.477 ], 00:23:31.477 "dhchap_dhgroups": [ 00:23:31.477 "null", 00:23:31.477 "ffdhe2048", 00:23:31.477 "ffdhe3072", 00:23:31.477 "ffdhe4096", 00:23:31.477 "ffdhe6144", 00:23:31.477 "ffdhe8192" 00:23:31.477 ] 00:23:31.477 } 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "method": "bdev_nvme_attach_controller", 00:23:31.477 "params": { 00:23:31.477 "name": "nvme0", 00:23:31.477 "trtype": "TCP", 00:23:31.477 "adrfam": "IPv4", 00:23:31.477 "traddr": "10.0.0.2", 00:23:31.477 "trsvcid": "4420", 00:23:31.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.477 "prchk_reftag": false, 00:23:31.477 "prchk_guard": false, 00:23:31.477 "ctrlr_loss_timeout_sec": 0, 00:23:31.477 "reconnect_delay_sec": 0, 00:23:31.477 "fast_io_fail_timeout_sec": 0, 00:23:31.477 "psk": "key0", 00:23:31.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 
00:23:31.477 "hdgst": false, 00:23:31.477 "ddgst": false 00:23:31.477 } 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "method": "bdev_nvme_set_hotplug", 00:23:31.477 "params": { 00:23:31.477 "period_us": 100000, 00:23:31.477 "enable": false 00:23:31.477 } 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "method": "bdev_enable_histogram", 00:23:31.477 "params": { 00:23:31.477 "name": "nvme0n1", 00:23:31.477 "enable": true 00:23:31.477 } 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "method": "bdev_wait_for_examine" 00:23:31.477 } 00:23:31.477 ] 00:23:31.477 }, 00:23:31.477 { 00:23:31.477 "subsystem": "nbd", 00:23:31.477 "config": [] 00:23:31.477 } 00:23:31.477 ] 00:23:31.477 }' 00:23:31.477 10:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.477 [2024-07-23 10:44:19.864142] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:31.478 [2024-07-23 10:44:19.864239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3861788 ] 00:23:31.478 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.478 [2024-07-23 10:44:19.925458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.755 [2024-07-23 10:44:20.018556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.755 [2024-07-23 10:44:20.185280] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.018 10:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:32.018 10:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:32.018 10:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:32.018 10:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # 
jq -r '.[].name' 00:23:32.276 10:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.276 10:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:32.276 Running I/O for 1 seconds... 00:23:33.651 00:23:33.651 Latency(us) 00:23:33.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.651 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:33.651 Verification LBA range: start 0x0 length 0x2000 00:23:33.651 nvme0n1 : 1.02 2878.37 11.24 0.00 0.00 43974.35 10291.58 41166.32 00:23:33.651 =================================================================================================================== 00:23:33.651 Total : 2878.37 11.24 0.00 0.00 43974.35 10291.58 41166.32 00:23:33.651 0 00:23:33.651 10:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:33.651 10:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:33.651 10:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:33.651 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:33.651 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:33.651 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:33.651 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:33.651 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:33.651 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:33.651 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:33.651 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:33.652 nvmf_trace.0 00:23:33.652 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:33.652 10:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3861788 00:23:33.652 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3861788 ']' 00:23:33.652 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3861788 00:23:33.652 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:33.652 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.652 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3861788 00:23:33.652 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:33.652 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:33.652 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3861788' 00:23:33.652 killing process with pid 3861788 00:23:33.652 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3861788 00:23:33.652 Received shutdown signal, test time was about 1.000000 seconds 00:23:33.652 00:23:33.652 Latency(us) 00:23:33.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.652 =================================================================================================================== 00:23:33.652 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.652 10:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3861788 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.652 rmmod nvme_tcp 00:23:33.652 rmmod nvme_fabrics 00:23:33.652 rmmod nvme_keyring 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3861666 ']' 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3861666 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3861666 ']' 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3861666 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3861666 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3861666' 00:23:33.652 killing process with pid 3861666 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3861666 00:23:33.652 10:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3861666 00:23:33.911 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.911 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:23:33.911 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.911 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.911 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.911 10:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.911 10:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.911 10:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.450 10:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:36.450 10:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9fWfHQVBsq /tmp/tmp.Pt46tBpyFY /tmp/tmp.R4Eewn4ize 00:23:36.450 00:23:36.450 real 1m16.307s 00:23:36.450 user 1m59.299s 00:23:36.450 sys 0m25.873s 00:23:36.450 10:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:36.450 10:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.450 ************************************ 00:23:36.450 END TEST nvmf_tls 00:23:36.450 ************************************ 00:23:36.450 10:44:24 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:36.450 10:44:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:36.450 10:44:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:36.450 10:44:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:36.450 ************************************ 00:23:36.450 START TEST nvmf_fips 00:23:36.450 ************************************ 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:36.450 * Looking for test storage... 
00:23:36.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.450 10:44:24 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:36.451 Error setting digest 00:23:36.451 00E2F8B57E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:36.451 00E2F8B57E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:36.451 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.452 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:36.452 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:36.452 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:36.452 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.452 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.452 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.452 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:36.452 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:36.452 10:44:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:23:36.452 10:44:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:37.828 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:37.829 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:37.829 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:37.829 Found net devices under 0000:08:00.0: cvl_0_0 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:37.829 Found net devices under 0000:08:00.1: cvl_0_1 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:37.829 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.087 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.087 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.087 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:38.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:23:38.087 00:23:38.087 --- 10.0.0.2 ping statistics --- 00:23:38.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.087 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:23:38.087 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:23:38.087 00:23:38.087 --- 10.0.0.1 ping statistics --- 00:23:38.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.088 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3863515 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3863515 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3863515 ']' 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:38.088 10:44:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:38.088 [2024-07-23 10:44:26.489900] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:38.088 [2024-07-23 10:44:26.489996] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.088 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.088 [2024-07-23 10:44:26.555672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.346 [2024-07-23 10:44:26.645267] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.346 [2024-07-23 10:44:26.645337] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.346 [2024-07-23 10:44:26.645354] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.346 [2024-07-23 10:44:26.645367] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.346 [2024-07-23 10:44:26.645378] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
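The `waitforlisten 3863515` call traced above blocks until the freshly launched `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that retry pattern, assuming a simple poll-for-path loop (the function name and retry interval are illustrative, not the actual SPDK helper, which also re-checks that the target PID is still alive between retries):

```shell
# Sketch of a waitforlisten-style poll: retry until a path (e.g. a UNIX
# domain RPC socket) appears, up to max_retries attempts, 0.1 s apart.
# Returns 0 once the path exists, 1 if it never shows up.
wait_for_listen() {
    local sock_path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        if [ -e "$sock_path" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

The bounded retry count matters here: if the target crashes during startup, the harness fails fast instead of hanging the whole pipeline.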
00:23:38.346 [2024-07-23 10:44:26.645411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:38.346 10:44:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:38.604 [2024-07-23 10:44:27.050461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.604 [2024-07-23 10:44:27.066424] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:23:38.604 [2024-07-23 10:44:27.066648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.604 [2024-07-23 10:44:27.096863] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:38.604 malloc0 00:23:38.862 10:44:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.862 10:44:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3863631 00:23:38.862 10:44:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.862 10:44:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3863631 /var/tmp/bdevperf.sock 00:23:38.862 10:44:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3863631 ']' 00:23:38.862 10:44:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.862 10:44:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:38.862 10:44:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.862 10:44:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:38.862 10:44:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:38.862 [2024-07-23 10:44:27.198887] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:38.862 [2024-07-23 10:44:27.198975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3863631 ] 00:23:38.862 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.862 [2024-07-23 10:44:27.259438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.862 [2024-07-23 10:44:27.347278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.120 10:44:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:39.120 10:44:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:39.120 10:44:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:39.378 [2024-07-23 10:44:27.702408] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.378 [2024-07-23 10:44:27.702538] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:39.378 TLSTESTn1 00:23:39.378 10:44:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:39.636 Running I/O for 10 seconds... 
00:23:49.618 00:23:49.618 Latency(us) 00:23:49.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.618 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:49.618 Verification LBA range: start 0x0 length 0x2000 00:23:49.618 TLSTESTn1 : 10.02 3323.78 12.98 0.00 0.00 38437.88 7378.87 33204.91 00:23:49.618 =================================================================================================================== 00:23:49.618 Total : 3323.78 12.98 0.00 0.00 38437.88 7378.87 33204.91 00:23:49.618 0 00:23:49.618 10:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:49.618 10:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:49.618 10:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:23:49.618 10:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:23:49.618 10:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:49.618 10:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:49.618 10:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:49.618 10:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:49.618 10:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:49.618 10:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:49.618 nvmf_trace.0 00:23:49.618 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:23:49.618 10:44:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3863631 00:23:49.618 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3863631 ']' 00:23:49.618 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill 
-0 3863631 00:23:49.618 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:49.618 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:49.618 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3863631 00:23:49.618 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:49.618 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:49.618 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3863631' 00:23:49.618 killing process with pid 3863631 00:23:49.618 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3863631 00:23:49.618 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.618 00:23:49.618 Latency(us) 00:23:49.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.618 =================================================================================================================== 00:23:49.618 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.618 [2024-07-23 10:44:38.056388] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:49.618 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3863631 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:23:49.877 rmmod nvme_tcp 00:23:49.877 rmmod nvme_fabrics 00:23:49.877 rmmod nvme_keyring 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3863515 ']' 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3863515 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3863515 ']' 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3863515 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3863515 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3863515' 00:23:49.877 killing process with pid 3863515 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3863515 00:23:49.877 [2024-07-23 10:44:38.312615] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:49.877 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3863515 00:23:50.137 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:50.137 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:50.137 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:23:50.137 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.137 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.137 10:44:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.137 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.137 10:44:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.054 10:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:52.054 10:44:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:52.054 00:23:52.054 real 0m16.144s 00:23:52.054 user 0m21.880s 00:23:52.054 sys 0m4.663s 00:23:52.054 10:44:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:52.054 10:44:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:52.054 ************************************ 00:23:52.054 END TEST nvmf_fips 00:23:52.054 ************************************ 00:23:52.312 10:44:40 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:52.312 10:44:40 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:52.312 10:44:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:52.312 10:44:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:52.312 10:44:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:52.312 ************************************ 00:23:52.312 START TEST nvmf_fuzz 00:23:52.312 ************************************ 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:52.312 * Looking for test 
storage... 00:23:52.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.312 10:44:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.216 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.216 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.216 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.216 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.216 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.216 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.216 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.216 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:54.217 
10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:54.217 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:54.217 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # 
for pci in "${pci_devs[@]}" 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:54.217 Found net devices under 0000:08:00.0: cvl_0_0 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:54.217 Found net devices under 0000:08:00.1: cvl_0_1 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # 
is_hw=yes 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:54.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:23:54.217 00:23:54.217 --- 10.0.0.2 ping statistics --- 00:23:54.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.217 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:54.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:23:54.217 00:23:54.217 --- 10.0.0.1 ping statistics --- 00:23:54.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.217 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- 
target/fabrics_fuzz.sh@14 -- # nvmfpid=3866037 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3866037 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3866037 ']' 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:54.217 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.476 Malloc0 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:54.476 10:44:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:26.655 Fuzzing completed. Shutting down the fuzz application 00:24:26.655 00:24:26.655 Dumping successful admin opcodes: 00:24:26.655 8, 9, 10, 24, 00:24:26.655 Dumping successful io opcodes: 00:24:26.655 0, 9, 00:24:26.655 NS: 0x200003aeff00 I/O qp, Total commands completed: 461576, total successful commands: 2669, random_seed: 1860949056 00:24:26.655 NS: 0x200003aeff00 admin qp, Total commands completed: 54068, total successful commands: 435, random_seed: 2748204992 00:24:26.655 10:45:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:26.656 Fuzzing completed. 
Shutting down the fuzz application 00:24:26.656 00:24:26.656 Dumping successful admin opcodes: 00:24:26.656 24, 00:24:26.656 Dumping successful io opcodes: 00:24:26.656 00:24:26.656 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2757137593 00:24:26.656 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2757272241 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:26.656 rmmod nvme_tcp 00:24:26.656 rmmod nvme_fabrics 00:24:26.656 rmmod nvme_keyring 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3866037 ']' 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # 
killprocess 3866037 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3866037 ']' 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 3866037 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3866037 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3866037' 00:24:26.656 killing process with pid 3866037 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 3866037 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 3866037 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.656 10:45:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.562 10:45:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:28.562 10:45:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 
-- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:28.562 00:24:28.563 real 0m36.431s 00:24:28.563 user 0m51.238s 00:24:28.563 sys 0m13.874s 00:24:28.563 10:45:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:28.563 10:45:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.563 ************************************ 00:24:28.563 END TEST nvmf_fuzz 00:24:28.563 ************************************ 00:24:28.563 10:45:17 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:28.563 10:45:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:28.563 10:45:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:28.563 10:45:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.821 ************************************ 00:24:28.821 START TEST nvmf_multiconnection 00:24:28.821 ************************************ 00:24:28.821 10:45:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:28.821 * Looking for test storage... 
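For post-processing a run like the nvmf_fuzz test that just finished above, it can help to pull the completed/successful command counts out of the nvme_fuzz per-queue summary lines. A minimal sketch, assuming only the summary format shown in this log; the PASS/FAIL gate at the end is an invented example, not something the test script does:

```shell
#!/usr/bin/env bash
# Extract counts from an nvme_fuzz qp summary line (format copied from the log above).
summary='NS: 0x200003aeff00 I/O qp, Total commands completed: 461576, total successful commands: 2669, random_seed: 1860949056'

total=$(printf '%s\n' "$summary" | sed -n 's/.*Total commands completed: \([0-9]*\),.*/\1/p')
ok=$(printf '%s\n' "$summary" | sed -n 's/.*total successful commands: \([0-9]*\),.*/\1/p')

echo "total=$total ok=$ok"
# Hypothetical gate: flag a run in which no I/O command ever succeeded.
if [ "$ok" -gt 0 ]; then echo PASS; else echo FAIL; fi
```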
00:24:28.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:28.821 10:45:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.821 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:28.821 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.822 
10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.822 10:45:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@291 -- # pci_devs=() 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.729 
10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:30.729 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.729 
10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:30.729 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:30.729 Found net devices under 
0000:08:00.0: cvl_0_0 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:30.729 Found net devices under 0000:08:00.1: cvl_0_1 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.729 
10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.729 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:30.729 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:24:30.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:24:30.730 00:24:30.730 --- 10.0.0.2 ping statistics --- 00:24:30.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.730 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:30.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:24:30.730 00:24:30.730 --- 10.0.0.1 ping statistics --- 00:24:30.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.730 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:30.730 
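The two ping exchanges above verify the topology that nvmftestinit builds: one CVL port (cvl_0_0) is moved into a private network namespace for the target, while its sibling port (cvl_0_1) stays in the root namespace for the initiator. A dry-run sketch of that plumbing, with interface names and addresses taken from the trace; RUN defaults to echo because the real invocations need root and the physical NIC:

```shell
#!/usr/bin/env bash
# Dry-run reconstruction of the netns setup from nvmf/common.sh as echoed above.
RUN=${RUN:-echo}          # set RUN= (empty) to execute for real; requires root
NS=cvl_0_0_ns_spdk

$RUN ip netns add "$NS"
$RUN ip link set cvl_0_0 netns "$NS"           # target port into the namespace
$RUN ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side, root namespace
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
$RUN ip link set cvl_0_1 up
$RUN ip netns exec "$NS" ip link set cvl_0_0 up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2                        # the reachability check the log performs
```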
10:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3871023 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3871023 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 3871023 ']' 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:30.730 10:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.730 [2024-07-23 10:45:18.948730] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:24:30.730 [2024-07-23 10:45:18.948830] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.730 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.730 [2024-07-23 10:45:19.018073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:30.730 [2024-07-23 10:45:19.107452] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:30.730 [2024-07-23 10:45:19.107514] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.730 [2024-07-23 10:45:19.107531] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.730 [2024-07-23 10:45:19.107549] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.730 [2024-07-23 10:45:19.107561] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.730 [2024-07-23 10:45:19.107653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.730 [2024-07-23 10:45:19.107743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.730 [2024-07-23 10:45:19.107830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.730 [2024-07-23 10:45:19.107834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.730 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:30.730 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:30.730 10:45:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:30.730 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:30.730 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 
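The repeated Malloc1/cnode1, Malloc2/cnode2, ... blocks that follow come from the loop in target/multiconnection.sh visible in the trace (NVMF_SUBSYS=11, trace markers @21-@25). A stand-alone sketch of that loop; rpc_cmd here is a stub that just echoes, since the real helper talks to a running nvmf_tgt over /var/tmp/spdk.sock:

```shell
#!/usr/bin/env bash
# Stub so the loop runs without SPDK; the real rpc_cmd wraps scripts/rpc.py.
rpc_cmd() { echo "rpc_cmd $*"; }

NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
```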
[2024-07-23 10:45:19.250135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 Malloc1 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.989 10:45:19 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 [2024-07-23 10:45:19.304570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 Malloc2 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 Malloc3 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 Malloc4 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 Malloc5 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.989 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.248 Malloc6 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.248 Malloc7 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.248 Malloc8 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.248 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.249 Malloc9 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.249 Malloc10 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.249 Malloc11 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.249 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.507 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.507 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:31.507 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.507 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.507 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.507 10:45:19 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:31.507 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.507 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.507 10:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.507 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:31.507 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.507 10:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:32.073 10:45:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:32.073 10:45:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:32.073 10:45:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:32.073 10:45:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:32.073 10:45:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:33.999 10:45:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:33.999 10:45:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:33.999 10:45:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:33.999 10:45:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:33.999 10:45:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- 
# (( nvme_devices == nvme_device_counter )) 00:24:33.999 10:45:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:33.999 10:45:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:33.999 10:45:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:34.258 10:45:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:34.258 10:45:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:34.258 10:45:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:34.258 10:45:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:34.258 10:45:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:36.787 10:45:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:36.787 10:45:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:36.787 10:45:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:36.787 10:45:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:36.787 10:45:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:36.787 10:45:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:36.787 10:45:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.787 10:45:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:37.046 10:45:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:37.046 10:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:37.046 10:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:37.046 10:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:37.046 10:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:38.944 10:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:38.944 10:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:38.944 10:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:38.944 10:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:38.944 10:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:38.945 10:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:38.945 10:45:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.945 10:45:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:39.509 10:45:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:39.509 10:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # 
local i=0 00:24:39.509 10:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:39.509 10:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:39.509 10:45:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:41.406 10:45:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:41.406 10:45:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:41.406 10:45:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:41.406 10:45:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:41.406 10:45:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:41.406 10:45:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:41.406 10:45:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.406 10:45:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:41.971 10:45:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:41.971 10:45:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:41.971 10:45:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:41.971 10:45:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:41.971 10:45:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:44.497 10:45:32 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:44.497 10:45:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:44.497 10:45:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:24:44.497 10:45:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:44.497 10:45:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:44.497 10:45:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:44.497 10:45:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:44.497 10:45:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:44.755 10:45:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:44.755 10:45:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:44.755 10:45:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:44.755 10:45:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:44.755 10:45:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:46.657 10:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:46.657 10:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:46.657 10:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:24:46.657 10:45:35 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:46.657 10:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:46.657 10:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:46.657 10:45:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.657 10:45:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:47.589 10:45:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:47.589 10:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:47.589 10:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:47.589 10:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:47.589 10:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:49.487 10:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:49.487 10:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:49.487 10:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:24:49.487 10:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:49.487 10:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:49.487 10:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:49.487 10:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:24:49.487 10:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:50.052 10:45:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:50.052 10:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:50.052 10:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:50.052 10:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:50.052 10:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:51.951 10:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:51.951 10:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:51.951 10:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:24:51.951 10:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:51.951 10:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:51.951 10:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:51.951 10:45:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.951 10:45:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:52.886 10:45:41 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:52.886 10:45:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:52.886 10:45:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.886 10:45:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:52.886 10:45:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:54.784 10:45:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:54.784 10:45:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:54.784 10:45:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:24:54.784 10:45:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:54.784 10:45:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:54.784 10:45:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:54.784 10:45:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.784 10:45:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:55.349 10:45:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:55.349 10:45:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:55.349 10:45:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:55.349 10:45:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # 
[[ -n '' ]] 00:24:55.349 10:45:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:57.247 10:45:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:57.247 10:45:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:57.247 10:45:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:24:57.247 10:45:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:57.247 10:45:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:57.247 10:45:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:57.247 10:45:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.247 10:45:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:58.180 10:45:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:58.180 10:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:58.180 10:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:58.180 10:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:58.180 10:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:00.077 10:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:00.077 10:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:00.077 10:45:48 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:00.077 10:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:00.078 10:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:00.078 10:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:00.078 10:45:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:00.078 [global] 00:25:00.078 thread=1 00:25:00.078 invalidate=1 00:25:00.078 rw=read 00:25:00.078 time_based=1 00:25:00.078 runtime=10 00:25:00.078 ioengine=libaio 00:25:00.078 direct=1 00:25:00.078 bs=262144 00:25:00.078 iodepth=64 00:25:00.078 norandommap=1 00:25:00.078 numjobs=1 00:25:00.078 00:25:00.078 [job0] 00:25:00.078 filename=/dev/nvme0n1 00:25:00.078 [job1] 00:25:00.078 filename=/dev/nvme10n1 00:25:00.078 [job2] 00:25:00.078 filename=/dev/nvme1n1 00:25:00.078 [job3] 00:25:00.078 filename=/dev/nvme2n1 00:25:00.078 [job4] 00:25:00.078 filename=/dev/nvme3n1 00:25:00.078 [job5] 00:25:00.078 filename=/dev/nvme4n1 00:25:00.078 [job6] 00:25:00.078 filename=/dev/nvme5n1 00:25:00.078 [job7] 00:25:00.078 filename=/dev/nvme6n1 00:25:00.078 [job8] 00:25:00.078 filename=/dev/nvme7n1 00:25:00.078 [job9] 00:25:00.078 filename=/dev/nvme8n1 00:25:00.078 [job10] 00:25:00.078 filename=/dev/nvme9n1 00:25:00.336 Could not set queue depth (nvme0n1) 00:25:00.336 Could not set queue depth (nvme10n1) 00:25:00.336 Could not set queue depth (nvme1n1) 00:25:00.336 Could not set queue depth (nvme2n1) 00:25:00.336 Could not set queue depth (nvme3n1) 00:25:00.336 Could not set queue depth (nvme4n1) 00:25:00.336 Could not set queue depth (nvme5n1) 00:25:00.336 Could not set queue depth (nvme6n1) 00:25:00.336 Could not set queue depth (nvme7n1) 00:25:00.336 Could not set 
queue depth (nvme8n1) 00:25:00.336 Could not set queue depth (nvme9n1) 00:25:00.336 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.336 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.336 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.336 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.336 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.336 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.336 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.336 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.336 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.336 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.336 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.336 fio-3.35 00:25:00.336 Starting 11 threads 00:25:12.585 00:25:12.585 job0: (groupid=0, jobs=1): err= 0: pid=3874266: Tue Jul 23 10:45:59 2024 00:25:12.585 read: IOPS=746, BW=187MiB/s (196MB/s)(1870MiB/10016msec) 00:25:12.585 slat (usec): min=10, max=121213, avg=943.33, stdev=5233.46 00:25:12.585 clat (usec): min=1763, max=398242, avg=84645.32, stdev=64475.95 00:25:12.585 lat (usec): min=1818, max=398298, avg=85588.65, stdev=65480.47 00:25:12.585 clat percentiles (msec): 00:25:12.585 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 24], 20.00th=[ 34], 00:25:12.585 | 
30.00th=[ 43], 40.00th=[ 52], 50.00th=[ 66], 60.00th=[ 80], 00:25:12.585 | 70.00th=[ 106], 80.00th=[ 136], 90.00th=[ 165], 95.00th=[ 230], 00:25:12.585 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 321], 99.95th=[ 326], 00:25:12.585 | 99.99th=[ 397] 00:25:12.585 bw ( KiB/s): min=56832, max=435200, per=10.52%, avg=189900.80, stdev=102180.80, samples=20 00:25:12.585 iops : min= 222, max= 1700, avg=741.80, stdev=399.14, samples=20 00:25:12.585 lat (msec) : 2=0.19%, 4=0.67%, 10=1.66%, 20=4.53%, 50=32.09% 00:25:12.585 lat (msec) : 100=29.55%, 250=27.51%, 500=3.80% 00:25:12.585 cpu : usr=0.50%, sys=2.41%, ctx=1148, majf=0, minf=4097 00:25:12.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:12.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.585 issued rwts: total=7481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.585 job1: (groupid=0, jobs=1): err= 0: pid=3874269: Tue Jul 23 10:45:59 2024 00:25:12.585 read: IOPS=587, BW=147MiB/s (154MB/s)(1486MiB/10116msec) 00:25:12.585 slat (usec): min=14, max=177494, avg=1476.10, stdev=6370.04 00:25:12.585 clat (usec): min=729, max=460639, avg=107291.75, stdev=60595.94 00:25:12.585 lat (usec): min=751, max=473661, avg=108767.85, stdev=61558.68 00:25:12.585 clat percentiles (msec): 00:25:12.585 | 1.00th=[ 5], 5.00th=[ 43], 10.00th=[ 53], 20.00th=[ 61], 00:25:12.585 | 30.00th=[ 69], 40.00th=[ 78], 50.00th=[ 86], 60.00th=[ 102], 00:25:12.585 | 70.00th=[ 134], 80.00th=[ 157], 90.00th=[ 180], 95.00th=[ 215], 00:25:12.585 | 99.00th=[ 305], 99.50th=[ 321], 99.90th=[ 347], 99.95th=[ 443], 00:25:12.585 | 99.99th=[ 460] 00:25:12.585 bw ( KiB/s): min=53248, max=258048, per=8.34%, avg=150502.40, stdev=62335.86, samples=20 00:25:12.585 iops : min= 208, max= 1008, avg=587.90, stdev=243.50, samples=20 00:25:12.585 lat (usec) : 
750=0.03%, 1000=0.03% 00:25:12.585 lat (msec) : 2=0.02%, 4=0.77%, 10=0.56%, 20=0.13%, 50=6.83% 00:25:12.585 lat (msec) : 100=50.85%, 250=36.80%, 500=3.97% 00:25:12.585 cpu : usr=0.39%, sys=2.13%, ctx=1011, majf=0, minf=4097 00:25:12.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:12.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.585 issued rwts: total=5943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.585 job2: (groupid=0, jobs=1): err= 0: pid=3874282: Tue Jul 23 10:45:59 2024 00:25:12.585 read: IOPS=627, BW=157MiB/s (164MB/s)(1582MiB/10085msec) 00:25:12.585 slat (usec): min=14, max=196083, avg=1206.79, stdev=5849.70 00:25:12.585 clat (usec): min=1032, max=434887, avg=100641.82, stdev=52026.40 00:25:12.586 lat (usec): min=1052, max=434932, avg=101848.60, stdev=52775.90 00:25:12.586 clat percentiles (msec): 00:25:12.586 | 1.00th=[ 12], 5.00th=[ 28], 10.00th=[ 41], 20.00th=[ 59], 00:25:12.586 | 30.00th=[ 70], 40.00th=[ 84], 50.00th=[ 99], 60.00th=[ 109], 00:25:12.586 | 70.00th=[ 121], 80.00th=[ 136], 90.00th=[ 159], 95.00th=[ 182], 00:25:12.586 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 334], 99.95th=[ 338], 00:25:12.586 | 99.99th=[ 435] 00:25:12.586 bw ( KiB/s): min=56320, max=248320, per=8.88%, avg=160358.40, stdev=51960.09, samples=20 00:25:12.586 iops : min= 220, max= 970, avg=626.40, stdev=202.97, samples=20 00:25:12.586 lat (msec) : 2=0.11%, 4=0.33%, 10=0.43%, 20=2.99%, 50=10.10% 00:25:12.586 lat (msec) : 100=37.41%, 250=46.53%, 500=2.10% 00:25:12.586 cpu : usr=0.35%, sys=2.41%, ctx=1019, majf=0, minf=4097 00:25:12.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:12.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.586 issued rwts: total=6327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.586 job3: (groupid=0, jobs=1): err= 0: pid=3874290: Tue Jul 23 10:45:59 2024 00:25:12.586 read: IOPS=580, BW=145MiB/s (152MB/s)(1468MiB/10111msec) 00:25:12.586 slat (usec): min=10, max=165061, avg=1361.30, stdev=5522.17 00:25:12.586 clat (usec): min=1296, max=392370, avg=108692.79, stdev=55967.13 00:25:12.586 lat (usec): min=1342, max=465403, avg=110054.09, stdev=56780.98 00:25:12.586 clat percentiles (msec): 00:25:12.586 | 1.00th=[ 5], 5.00th=[ 23], 10.00th=[ 44], 20.00th=[ 66], 00:25:12.586 | 30.00th=[ 77], 40.00th=[ 89], 50.00th=[ 106], 60.00th=[ 123], 00:25:12.586 | 70.00th=[ 136], 80.00th=[ 153], 90.00th=[ 171], 95.00th=[ 197], 00:25:12.586 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 393], 99.95th=[ 393], 00:25:12.586 | 99.99th=[ 393] 00:25:12.586 bw ( KiB/s): min=69632, max=245760, per=8.23%, avg=148659.20, stdev=54784.45, samples=20 00:25:12.586 iops : min= 272, max= 960, avg=580.70, stdev=214.00, samples=20 00:25:12.586 lat (msec) : 2=0.10%, 4=0.63%, 10=1.18%, 20=2.86%, 50=8.19% 00:25:12.586 lat (msec) : 100=34.07%, 250=50.95%, 500=2.01% 00:25:12.586 cpu : usr=0.40%, sys=2.15%, ctx=1012, majf=0, minf=4097 00:25:12.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:12.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.586 issued rwts: total=5870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.586 job4: (groupid=0, jobs=1): err= 0: pid=3874295: Tue Jul 23 10:45:59 2024 00:25:12.586 read: IOPS=964, BW=241MiB/s (253MB/s)(2429MiB/10077msec) 00:25:12.586 slat (usec): min=14, max=68244, avg=816.06, stdev=3147.45 00:25:12.586 clat (msec): min=2, max=193, avg=65.46, 
stdev=35.11 00:25:12.586 lat (msec): min=3, max=226, avg=66.28, stdev=35.43 00:25:12.586 clat percentiles (msec): 00:25:12.586 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 32], 00:25:12.586 | 30.00th=[ 45], 40.00th=[ 53], 50.00th=[ 59], 60.00th=[ 67], 00:25:12.586 | 70.00th=[ 78], 80.00th=[ 93], 90.00th=[ 118], 95.00th=[ 136], 00:25:12.586 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 176], 99.95th=[ 182], 00:25:12.586 | 99.99th=[ 194] 00:25:12.586 bw ( KiB/s): min=117248, max=593408, per=13.69%, avg=247158.25, stdev=110600.08, samples=20 00:25:12.586 iops : min= 458, max= 2318, avg=965.45, stdev=432.04, samples=20 00:25:12.586 lat (msec) : 4=0.12%, 10=1.02%, 20=2.28%, 50=32.30%, 100=48.24% 00:25:12.586 lat (msec) : 250=16.03% 00:25:12.586 cpu : usr=0.53%, sys=3.48%, ctx=1486, majf=0, minf=4097 00:25:12.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:12.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.586 issued rwts: total=9717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.586 job5: (groupid=0, jobs=1): err= 0: pid=3874317: Tue Jul 23 10:45:59 2024 00:25:12.586 read: IOPS=808, BW=202MiB/s (212MB/s)(2030MiB/10042msec) 00:25:12.586 slat (usec): min=15, max=68556, avg=1002.54, stdev=3881.29 00:25:12.586 clat (usec): min=999, max=221907, avg=78031.67, stdev=53401.55 00:25:12.586 lat (usec): min=1028, max=253394, avg=79034.21, stdev=54098.85 00:25:12.586 clat percentiles (msec): 00:25:12.586 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 25], 20.00th=[ 32], 00:25:12.586 | 30.00th=[ 35], 40.00th=[ 40], 50.00th=[ 57], 60.00th=[ 87], 00:25:12.586 | 70.00th=[ 117], 80.00th=[ 138], 90.00th=[ 157], 95.00th=[ 169], 00:25:12.586 | 99.00th=[ 188], 99.50th=[ 197], 99.90th=[ 209], 99.95th=[ 222], 00:25:12.586 | 99.99th=[ 222] 00:25:12.586 bw ( KiB/s): 
min=95744, max=536576, per=11.43%, avg=206284.80, stdev=141353.97, samples=20 00:25:12.586 iops : min= 374, max= 2096, avg=805.80, stdev=552.16, samples=20 00:25:12.586 lat (usec) : 1000=0.01% 00:25:12.586 lat (msec) : 2=0.34%, 4=0.55%, 10=2.54%, 20=2.97%, 50=40.98% 00:25:12.586 lat (msec) : 100=15.87%, 250=36.73% 00:25:12.586 cpu : usr=0.50%, sys=2.67%, ctx=1210, majf=0, minf=4097 00:25:12.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:12.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.586 issued rwts: total=8121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.586 job6: (groupid=0, jobs=1): err= 0: pid=3874335: Tue Jul 23 10:45:59 2024 00:25:12.586 read: IOPS=440, BW=110MiB/s (115MB/s)(1114MiB/10114msec) 00:25:12.586 slat (usec): min=14, max=161042, avg=1648.20, stdev=7410.15 00:25:12.586 clat (msec): min=22, max=414, avg=143.41, stdev=61.06 00:25:12.586 lat (msec): min=22, max=438, avg=145.06, stdev=62.35 00:25:12.586 clat percentiles (msec): 00:25:12.586 | 1.00th=[ 54], 5.00th=[ 68], 10.00th=[ 77], 20.00th=[ 89], 00:25:12.586 | 30.00th=[ 105], 40.00th=[ 121], 50.00th=[ 138], 60.00th=[ 150], 00:25:12.586 | 70.00th=[ 165], 80.00th=[ 178], 90.00th=[ 230], 95.00th=[ 284], 00:25:12.586 | 99.00th=[ 313], 99.50th=[ 321], 99.90th=[ 388], 99.95th=[ 397], 00:25:12.586 | 99.99th=[ 414] 00:25:12.586 bw ( KiB/s): min=49664, max=214528, per=6.23%, avg=112395.85, stdev=38379.31, samples=20 00:25:12.586 iops : min= 194, max= 838, avg=439.00, stdev=149.91, samples=20 00:25:12.586 lat (msec) : 50=0.38%, 100=26.96%, 250=63.94%, 500=8.71% 00:25:12.586 cpu : usr=0.35%, sys=1.69%, ctx=834, majf=0, minf=4097 00:25:12.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:12.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:25:12.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.586 issued rwts: total=4454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.586 job7: (groupid=0, jobs=1): err= 0: pid=3874343: Tue Jul 23 10:45:59 2024 00:25:12.586 read: IOPS=457, BW=114MiB/s (120MB/s)(1149MiB/10038msec) 00:25:12.586 slat (usec): min=14, max=167299, avg=1673.67, stdev=7294.85 00:25:12.586 clat (msec): min=9, max=457, avg=137.92, stdev=59.74 00:25:12.586 lat (msec): min=9, max=457, avg=139.59, stdev=60.83 00:25:12.586 clat percentiles (msec): 00:25:12.586 | 1.00th=[ 23], 5.00th=[ 38], 10.00th=[ 57], 20.00th=[ 91], 00:25:12.586 | 30.00th=[ 112], 40.00th=[ 128], 50.00th=[ 138], 60.00th=[ 150], 00:25:12.586 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 211], 95.00th=[ 257], 00:25:12.586 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 313], 99.95th=[ 321], 00:25:12.586 | 99.99th=[ 460] 00:25:12.586 bw ( KiB/s): min=63488, max=227840, per=6.43%, avg=116070.40, stdev=34785.12, samples=20 00:25:12.586 iops : min= 248, max= 890, avg=453.40, stdev=135.88, samples=20 00:25:12.586 lat (msec) : 10=0.07%, 20=0.70%, 50=7.16%, 100=15.47%, 250=70.26% 00:25:12.586 lat (msec) : 500=6.35% 00:25:12.586 cpu : usr=0.31%, sys=1.70%, ctx=843, majf=0, minf=4097 00:25:12.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:12.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.586 issued rwts: total=4597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.586 job8: (groupid=0, jobs=1): err= 0: pid=3874380: Tue Jul 23 10:45:59 2024 00:25:12.586 read: IOPS=561, BW=140MiB/s (147MB/s)(1419MiB/10114msec) 00:25:12.586 slat (usec): min=10, max=127750, avg=1131.81, stdev=5688.74 00:25:12.586 
clat (usec): min=1097, max=404023, avg=112727.99, stdev=66844.85 00:25:12.586 lat (usec): min=1149, max=437783, avg=113859.80, stdev=67940.27 00:25:12.586 clat percentiles (msec): 00:25:12.586 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 33], 20.00th=[ 61], 00:25:12.586 | 30.00th=[ 78], 40.00th=[ 89], 50.00th=[ 103], 60.00th=[ 121], 00:25:12.586 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 186], 95.00th=[ 264], 00:25:12.586 | 99.00th=[ 317], 99.50th=[ 334], 99.90th=[ 351], 99.95th=[ 405], 00:25:12.586 | 99.99th=[ 405] 00:25:12.586 bw ( KiB/s): min=56832, max=236032, per=7.96%, avg=143718.40, stdev=56493.69, samples=20 00:25:12.586 iops : min= 222, max= 922, avg=561.40, stdev=220.68, samples=20 00:25:12.586 lat (msec) : 2=0.30%, 4=0.42%, 10=1.30%, 20=2.73%, 50=10.90% 00:25:12.586 lat (msec) : 100=33.12%, 250=45.13%, 500=6.09% 00:25:12.586 cpu : usr=0.29%, sys=2.00%, ctx=1104, majf=0, minf=4097 00:25:12.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:12.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.586 issued rwts: total=5677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.586 job9: (groupid=0, jobs=1): err= 0: pid=3874389: Tue Jul 23 10:45:59 2024 00:25:12.586 read: IOPS=825, BW=206MiB/s (216MB/s)(2081MiB/10082msec) 00:25:12.586 slat (usec): min=10, max=101682, avg=885.18, stdev=3827.65 00:25:12.586 clat (usec): min=1316, max=231056, avg=76533.16, stdev=43297.82 00:25:12.586 lat (usec): min=1340, max=231110, avg=77418.35, stdev=43798.13 00:25:12.586 clat percentiles (msec): 00:25:12.587 | 1.00th=[ 13], 5.00th=[ 25], 10.00th=[ 29], 20.00th=[ 36], 00:25:12.587 | 30.00th=[ 45], 40.00th=[ 55], 50.00th=[ 68], 60.00th=[ 83], 00:25:12.587 | 70.00th=[ 102], 80.00th=[ 115], 90.00th=[ 142], 95.00th=[ 157], 00:25:12.587 | 99.00th=[ 186], 99.50th=[ 194], 
99.90th=[ 215], 99.95th=[ 215], 00:25:12.587 | 99.99th=[ 232] 00:25:12.587 bw ( KiB/s): min=104448, max=482816, per=11.71%, avg=211430.40, stdev=89488.90, samples=20 00:25:12.587 iops : min= 408, max= 1886, avg=825.90, stdev=349.57, samples=20 00:25:12.587 lat (msec) : 2=0.08%, 4=0.37%, 10=0.20%, 20=2.55%, 50=33.29% 00:25:12.587 lat (msec) : 100=32.83%, 250=30.68% 00:25:12.587 cpu : usr=0.52%, sys=2.39%, ctx=1212, majf=0, minf=4097 00:25:12.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:12.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.587 issued rwts: total=8322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.587 job10: (groupid=0, jobs=1): err= 0: pid=3874392: Tue Jul 23 10:45:59 2024 00:25:12.587 read: IOPS=477, BW=119MiB/s (125MB/s)(1208MiB/10111msec) 00:25:12.587 slat (usec): min=11, max=79156, avg=1557.38, stdev=5327.19 00:25:12.587 clat (msec): min=9, max=275, avg=132.19, stdev=38.08 00:25:12.587 lat (msec): min=9, max=275, avg=133.74, stdev=38.74 00:25:12.587 clat percentiles (msec): 00:25:12.587 | 1.00th=[ 32], 5.00th=[ 60], 10.00th=[ 80], 20.00th=[ 106], 00:25:12.587 | 30.00th=[ 117], 40.00th=[ 128], 50.00th=[ 136], 60.00th=[ 144], 00:25:12.587 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 174], 95.00th=[ 188], 00:25:12.587 | 99.00th=[ 215], 99.50th=[ 243], 99.90th=[ 264], 99.95th=[ 264], 00:25:12.587 | 99.99th=[ 275] 00:25:12.587 bw ( KiB/s): min=84992, max=226816, per=6.76%, avg=122035.20, stdev=30533.17, samples=20 00:25:12.587 iops : min= 332, max= 886, avg=476.70, stdev=119.27, samples=20 00:25:12.587 lat (msec) : 10=0.04%, 20=0.14%, 50=3.00%, 100=13.98%, 250=82.38% 00:25:12.587 lat (msec) : 500=0.46% 00:25:12.587 cpu : usr=0.32%, sys=1.66%, ctx=941, majf=0, minf=3972 00:25:12.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 
16=0.3%, 32=0.7%, >=64=98.7% 00:25:12.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.587 issued rwts: total=4830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.587 00:25:12.587 Run status group 0 (all jobs): 00:25:12.587 READ: bw=1763MiB/s (1849MB/s), 110MiB/s-241MiB/s (115MB/s-253MB/s), io=17.4GiB (18.7GB), run=10016-10116msec 00:25:12.587 00:25:12.587 Disk stats (read/write): 00:25:12.587 nvme0n1: ios=14720/0, merge=0/0, ticks=1243248/0, in_queue=1243248, util=97.19% 00:25:12.587 nvme10n1: ios=11745/0, merge=0/0, ticks=1229366/0, in_queue=1229366, util=97.36% 00:25:12.587 nvme1n1: ios=12469/0, merge=0/0, ticks=1242694/0, in_queue=1242694, util=97.66% 00:25:12.587 nvme2n1: ios=11590/0, merge=0/0, ticks=1227315/0, in_queue=1227315, util=97.79% 00:25:12.587 nvme3n1: ios=19224/0, merge=0/0, ticks=1239886/0, in_queue=1239886, util=97.83% 00:25:12.587 nvme4n1: ios=15957/0, merge=0/0, ticks=1240920/0, in_queue=1240920, util=98.22% 00:25:12.587 nvme5n1: ios=8740/0, merge=0/0, ticks=1223902/0, in_queue=1223902, util=98.35% 00:25:12.587 nvme6n1: ios=8977/0, merge=0/0, ticks=1240013/0, in_queue=1240013, util=98.44% 00:25:12.587 nvme7n1: ios=11024/0, merge=0/0, ticks=1229469/0, in_queue=1229469, util=98.85% 00:25:12.587 nvme8n1: ios=16441/0, merge=0/0, ticks=1238861/0, in_queue=1238861, util=99.04% 00:25:12.587 nvme9n1: ios=9472/0, merge=0/0, ticks=1237794/0, in_queue=1237794, util=99.21% 00:25:12.587 10:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:12.587 [global] 00:25:12.587 thread=1 00:25:12.587 invalidate=1 00:25:12.587 rw=randwrite 00:25:12.587 time_based=1 00:25:12.587 runtime=10 00:25:12.587 ioengine=libaio 00:25:12.587 direct=1 
00:25:12.587 bs=262144 00:25:12.587 iodepth=64 00:25:12.587 norandommap=1 00:25:12.587 numjobs=1 00:25:12.587 00:25:12.587 [job0] 00:25:12.587 filename=/dev/nvme0n1 00:25:12.587 [job1] 00:25:12.587 filename=/dev/nvme10n1 00:25:12.587 [job2] 00:25:12.587 filename=/dev/nvme1n1 00:25:12.587 [job3] 00:25:12.587 filename=/dev/nvme2n1 00:25:12.587 [job4] 00:25:12.587 filename=/dev/nvme3n1 00:25:12.587 [job5] 00:25:12.587 filename=/dev/nvme4n1 00:25:12.587 [job6] 00:25:12.587 filename=/dev/nvme5n1 00:25:12.587 [job7] 00:25:12.587 filename=/dev/nvme6n1 00:25:12.587 [job8] 00:25:12.587 filename=/dev/nvme7n1 00:25:12.587 [job9] 00:25:12.587 filename=/dev/nvme8n1 00:25:12.587 [job10] 00:25:12.587 filename=/dev/nvme9n1 00:25:12.587 Could not set queue depth (nvme0n1) 00:25:12.587 Could not set queue depth (nvme10n1) 00:25:12.587 Could not set queue depth (nvme1n1) 00:25:12.587 Could not set queue depth (nvme2n1) 00:25:12.587 Could not set queue depth (nvme3n1) 00:25:12.587 Could not set queue depth (nvme4n1) 00:25:12.587 Could not set queue depth (nvme5n1) 00:25:12.587 Could not set queue depth (nvme6n1) 00:25:12.587 Could not set queue depth (nvme7n1) 00:25:12.587 Could not set queue depth (nvme8n1) 00:25:12.587 Could not set queue depth (nvme9n1) 00:25:12.587 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.587 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.587 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.587 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.587 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.587 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:25:12.587 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.587 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.587 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.587 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.587 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.587 fio-3.35 00:25:12.587 Starting 11 threads 00:25:22.607 00:25:22.607 job0: (groupid=0, jobs=1): err= 0: pid=3875245: Tue Jul 23 10:46:10 2024 00:25:22.607 write: IOPS=459, BW=115MiB/s (120MB/s)(1161MiB/10108msec); 0 zone resets 00:25:22.607 slat (usec): min=23, max=91962, avg=1445.95, stdev=4989.60 00:25:22.607 clat (msec): min=2, max=405, avg=137.79, stdev=92.38 00:25:22.607 lat (msec): min=3, max=405, avg=139.23, stdev=93.52 00:25:22.607 clat percentiles (msec): 00:25:22.607 | 1.00th=[ 10], 5.00th=[ 20], 10.00th=[ 33], 20.00th=[ 50], 00:25:22.607 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 117], 60.00th=[ 150], 00:25:22.607 | 70.00th=[ 201], 80.00th=[ 230], 90.00th=[ 271], 95.00th=[ 305], 00:25:22.607 | 99.00th=[ 355], 99.50th=[ 368], 99.90th=[ 384], 99.95th=[ 388], 00:25:22.607 | 99.99th=[ 405] 00:25:22.607 bw ( KiB/s): min=47104, max=223744, per=8.24%, avg=117266.40, stdev=52104.33, samples=20 00:25:22.607 iops : min= 184, max= 874, avg=458.05, stdev=203.55, samples=20 00:25:22.607 lat (msec) : 4=0.11%, 10=1.08%, 20=4.16%, 50=15.16%, 100=25.41% 00:25:22.607 lat (msec) : 250=41.02%, 500=13.07% 00:25:22.607 cpu : usr=1.32%, sys=1.58%, ctx=2937, majf=0, minf=1 00:25:22.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:22.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.607 issued rwts: total=0,4644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.607 job1: (groupid=0, jobs=1): err= 0: pid=3875246: Tue Jul 23 10:46:10 2024 00:25:22.607 write: IOPS=602, BW=151MiB/s (158MB/s)(1531MiB/10158msec); 0 zone resets 00:25:22.607 slat (usec): min=14, max=93790, avg=800.19, stdev=2843.89 00:25:22.607 clat (usec): min=919, max=412061, avg=105342.98, stdev=71636.21 00:25:22.607 lat (usec): min=955, max=412095, avg=106143.17, stdev=72074.37 00:25:22.607 clat percentiles (msec): 00:25:22.607 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 21], 20.00th=[ 42], 00:25:22.607 | 30.00th=[ 56], 40.00th=[ 81], 50.00th=[ 97], 60.00th=[ 112], 00:25:22.607 | 70.00th=[ 128], 80.00th=[ 161], 90.00th=[ 213], 95.00th=[ 241], 00:25:22.607 | 99.00th=[ 296], 99.50th=[ 321], 99.90th=[ 397], 99.95th=[ 409], 00:25:22.607 | 99.99th=[ 414] 00:25:22.607 bw ( KiB/s): min=77824, max=272384, per=10.90%, avg=155090.40, stdev=47957.98, samples=20 00:25:22.607 iops : min= 304, max= 1064, avg=605.80, stdev=187.31, samples=20 00:25:22.607 lat (usec) : 1000=0.02% 00:25:22.607 lat (msec) : 2=0.44%, 4=1.27%, 10=3.20%, 20=4.77%, 50=18.00% 00:25:22.607 lat (msec) : 100=24.21%, 250=44.33%, 500=3.76% 00:25:22.607 cpu : usr=1.53%, sys=2.20%, ctx=4099, majf=0, minf=1 00:25:22.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:22.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.607 issued rwts: total=0,6122,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.607 job2: (groupid=0, jobs=1): err= 0: pid=3875247: Tue Jul 23 10:46:10 2024 00:25:22.607 write: IOPS=575, BW=144MiB/s 
(151MB/s)(1458MiB/10128msec); 0 zone resets 00:25:22.607 slat (usec): min=18, max=161845, avg=1095.43, stdev=4518.80 00:25:22.608 clat (usec): min=714, max=461474, avg=109966.93, stdev=90520.96 00:25:22.608 lat (usec): min=751, max=463819, avg=111062.36, stdev=91588.96 00:25:22.608 clat percentiles (msec): 00:25:22.608 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 15], 20.00th=[ 35], 00:25:22.608 | 30.00th=[ 46], 40.00th=[ 67], 50.00th=[ 85], 60.00th=[ 107], 00:25:22.608 | 70.00th=[ 148], 80.00th=[ 192], 90.00th=[ 230], 95.00th=[ 271], 00:25:22.608 | 99.00th=[ 409], 99.50th=[ 418], 99.90th=[ 443], 99.95th=[ 456], 00:25:22.608 | 99.99th=[ 464] 00:25:22.608 bw ( KiB/s): min=43008, max=316416, per=10.38%, avg=147714.35, stdev=64315.88, samples=20 00:25:22.608 iops : min= 168, max= 1236, avg=576.95, stdev=251.30, samples=20 00:25:22.608 lat (usec) : 750=0.02%, 1000=0.05% 00:25:22.608 lat (msec) : 2=0.41%, 4=1.13%, 10=5.26%, 20=6.77%, 50=19.34% 00:25:22.608 lat (msec) : 100=24.58%, 250=35.54%, 500=6.89% 00:25:22.608 cpu : usr=1.64%, sys=1.89%, ctx=3780, majf=0, minf=1 00:25:22.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:22.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.608 issued rwts: total=0,5833,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.608 job3: (groupid=0, jobs=1): err= 0: pid=3875260: Tue Jul 23 10:46:10 2024 00:25:22.608 write: IOPS=457, BW=114MiB/s (120MB/s)(1164MiB/10178msec); 0 zone resets 00:25:22.608 slat (usec): min=18, max=154618, avg=1302.70, stdev=4669.06 00:25:22.608 clat (usec): min=947, max=497767, avg=138538.25, stdev=90345.61 00:25:22.608 lat (usec): min=983, max=515528, avg=139840.96, stdev=91262.72 00:25:22.608 clat percentiles (msec): 00:25:22.608 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 22], 20.00th=[ 49], 00:25:22.608 | 
30.00th=[ 79], 40.00th=[ 105], 50.00th=[ 130], 60.00th=[ 159], 00:25:22.608 | 70.00th=[ 190], 80.00th=[ 224], 90.00th=[ 259], 95.00th=[ 296], 00:25:22.608 | 99.00th=[ 342], 99.50th=[ 380], 99.90th=[ 481], 99.95th=[ 498], 00:25:22.608 | 99.99th=[ 498] 00:25:22.608 bw ( KiB/s): min=55296, max=275456, per=8.26%, avg=117559.80, stdev=53154.01, samples=20 00:25:22.608 iops : min= 216, max= 1076, avg=459.15, stdev=207.64, samples=20 00:25:22.608 lat (usec) : 1000=0.04% 00:25:22.608 lat (msec) : 2=0.39%, 4=1.61%, 10=2.96%, 20=4.36%, 50=11.39% 00:25:22.608 lat (msec) : 100=17.08%, 250=50.35%, 500=11.82% 00:25:22.608 cpu : usr=1.52%, sys=1.64%, ctx=2982, majf=0, minf=1 00:25:22.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:22.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.608 issued rwts: total=0,4655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.608 job4: (groupid=0, jobs=1): err= 0: pid=3875261: Tue Jul 23 10:46:10 2024 00:25:22.608 write: IOPS=522, BW=131MiB/s (137MB/s)(1312MiB/10044msec); 0 zone resets 00:25:22.608 slat (usec): min=21, max=82453, avg=1050.41, stdev=3556.05 00:25:22.608 clat (usec): min=1046, max=361550, avg=121404.83, stdev=72107.11 00:25:22.608 lat (usec): min=1111, max=365511, avg=122455.23, stdev=72787.33 00:25:22.608 clat percentiles (msec): 00:25:22.608 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 30], 20.00th=[ 57], 00:25:22.608 | 30.00th=[ 79], 40.00th=[ 94], 50.00th=[ 115], 60.00th=[ 131], 00:25:22.608 | 70.00th=[ 157], 80.00th=[ 184], 90.00th=[ 220], 95.00th=[ 249], 00:25:22.608 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 355], 99.95th=[ 359], 00:25:22.608 | 99.99th=[ 363] 00:25:22.608 bw ( KiB/s): min=70656, max=201728, per=9.33%, avg=132726.40, stdev=37060.73, samples=20 00:25:22.608 iops : min= 276, max= 788, avg=518.45, 
stdev=144.78, samples=20 00:25:22.608 lat (msec) : 2=0.29%, 4=0.91%, 10=2.71%, 20=2.78%, 50=10.99% 00:25:22.608 lat (msec) : 100=24.77%, 250=52.67%, 500=4.88% 00:25:22.608 cpu : usr=1.33%, sys=2.11%, ctx=3500, majf=0, minf=1 00:25:22.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:22.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.608 issued rwts: total=0,5248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.608 job5: (groupid=0, jobs=1): err= 0: pid=3875262: Tue Jul 23 10:46:10 2024 00:25:22.608 write: IOPS=577, BW=144MiB/s (151MB/s)(1477MiB/10228msec); 0 zone resets 00:25:22.608 slat (usec): min=17, max=215479, avg=791.18, stdev=4968.72 00:25:22.608 clat (usec): min=1032, max=559436, avg=109768.29, stdev=96688.38 00:25:22.608 lat (usec): min=1086, max=576041, avg=110559.48, stdev=97439.46 00:25:22.608 clat percentiles (msec): 00:25:22.608 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 20], 00:25:22.608 | 30.00th=[ 33], 40.00th=[ 63], 50.00th=[ 87], 60.00th=[ 117], 00:25:22.608 | 70.00th=[ 150], 80.00th=[ 199], 90.00th=[ 251], 95.00th=[ 279], 00:25:22.608 | 99.00th=[ 384], 99.50th=[ 464], 99.90th=[ 489], 99.95th=[ 493], 00:25:22.608 | 99.99th=[ 558] 00:25:22.608 bw ( KiB/s): min=57856, max=293812, per=10.51%, avg=149525.80, stdev=62386.78, samples=20 00:25:22.608 iops : min= 226, max= 1147, avg=584.05, stdev=243.61, samples=20 00:25:22.608 lat (msec) : 2=0.56%, 4=3.54%, 10=7.96%, 20=8.64%, 50=16.17% 00:25:22.608 lat (msec) : 100=18.88%, 250=34.10%, 500=10.13%, 750=0.03% 00:25:22.608 cpu : usr=1.63%, sys=2.19%, ctx=4560, majf=0, minf=1 00:25:22.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:22.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.608 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.608 issued rwts: total=0,5906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.608 job6: (groupid=0, jobs=1): err= 0: pid=3875263: Tue Jul 23 10:46:10 2024 00:25:22.608 write: IOPS=509, BW=127MiB/s (134MB/s)(1302MiB/10210msec); 0 zone resets 00:25:22.608 slat (usec): min=17, max=163441, avg=993.96, stdev=4373.90 00:25:22.608 clat (usec): min=821, max=555175, avg=124428.30, stdev=100159.21 00:25:22.608 lat (usec): min=877, max=555241, avg=125422.26, stdev=101044.87 00:25:22.608 clat percentiles (msec): 00:25:22.608 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 12], 20.00th=[ 31], 00:25:22.608 | 30.00th=[ 50], 40.00th=[ 74], 50.00th=[ 92], 60.00th=[ 133], 00:25:22.608 | 70.00th=[ 190], 80.00th=[ 220], 90.00th=[ 259], 95.00th=[ 313], 00:25:22.608 | 99.00th=[ 372], 99.50th=[ 405], 99.90th=[ 535], 99.95th=[ 558], 00:25:22.608 | 99.99th=[ 558] 00:25:22.608 bw ( KiB/s): min=35328, max=252928, per=9.26%, avg=131662.40, stdev=55324.27, samples=20 00:25:22.608 iops : min= 138, max= 988, avg=514.25, stdev=216.06, samples=20 00:25:22.608 lat (usec) : 1000=0.17% 00:25:22.608 lat (msec) : 2=0.71%, 4=2.23%, 10=5.28%, 20=6.99%, 50=14.83% 00:25:22.608 lat (msec) : 100=22.24%, 250=35.66%, 500=11.75%, 750=0.13% 00:25:22.608 cpu : usr=1.34%, sys=1.98%, ctx=3864, majf=0, minf=1 00:25:22.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:22.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.608 issued rwts: total=0,5207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.608 job7: (groupid=0, jobs=1): err= 0: pid=3875264: Tue Jul 23 10:46:10 2024 00:25:22.608 write: IOPS=479, BW=120MiB/s (126MB/s)(1223MiB/10209msec); 0 zone resets 00:25:22.608 slat 
(usec): min=17, max=101333, avg=1057.06, stdev=3900.47 00:25:22.608 clat (usec): min=797, max=509749, avg=132448.48, stdev=92790.16 00:25:22.608 lat (usec): min=863, max=509785, avg=133505.54, stdev=93686.82 00:25:22.608 clat percentiles (msec): 00:25:22.608 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 22], 20.00th=[ 43], 00:25:22.608 | 30.00th=[ 64], 40.00th=[ 95], 50.00th=[ 124], 60.00th=[ 142], 00:25:22.608 | 70.00th=[ 180], 80.00th=[ 218], 90.00th=[ 264], 95.00th=[ 296], 00:25:22.608 | 99.00th=[ 388], 99.50th=[ 435], 99.90th=[ 506], 99.95th=[ 506], 00:25:22.608 | 99.99th=[ 510] 00:25:22.608 bw ( KiB/s): min=57344, max=198144, per=8.69%, avg=123588.55, stdev=42460.55, samples=20 00:25:22.608 iops : min= 224, max= 774, avg=482.75, stdev=165.88, samples=20 00:25:22.608 lat (usec) : 1000=0.06% 00:25:22.608 lat (msec) : 2=0.41%, 4=1.14%, 10=2.76%, 20=5.21%, 50=14.15% 00:25:22.608 lat (msec) : 100=17.75%, 250=46.51%, 500=11.86%, 750=0.14% 00:25:22.608 cpu : usr=1.26%, sys=1.77%, ctx=3520, majf=0, minf=1 00:25:22.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:22.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.608 issued rwts: total=0,4891,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.608 job8: (groupid=0, jobs=1): err= 0: pid=3875265: Tue Jul 23 10:46:10 2024 00:25:22.608 write: IOPS=400, BW=100MiB/s (105MB/s)(1013MiB/10117msec); 0 zone resets 00:25:22.608 slat (usec): min=19, max=64037, avg=1310.31, stdev=4756.79 00:25:22.608 clat (usec): min=1404, max=374034, avg=158362.49, stdev=99312.72 00:25:22.608 lat (usec): min=1471, max=376317, avg=159672.81, stdev=100440.11 00:25:22.608 clat percentiles (msec): 00:25:22.608 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 20], 20.00th=[ 48], 00:25:22.608 | 30.00th=[ 79], 40.00th=[ 127], 50.00th=[ 171], 60.00th=[ 
201], 00:25:22.608 | 70.00th=[ 224], 80.00th=[ 259], 90.00th=[ 288], 95.00th=[ 309], 00:25:22.608 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 372], 00:25:22.608 | 99.99th=[ 376] 00:25:22.608 bw ( KiB/s): min=59904, max=212992, per=7.18%, avg=102132.85, stdev=44268.42, samples=20 00:25:22.608 iops : min= 234, max= 832, avg=398.95, stdev=172.92, samples=20 00:25:22.608 lat (msec) : 2=0.05%, 4=0.67%, 10=3.13%, 20=6.49%, 50=10.44% 00:25:22.608 lat (msec) : 100=14.09%, 250=42.07%, 500=23.07% 00:25:22.608 cpu : usr=1.19%, sys=1.50%, ctx=2916, majf=0, minf=1 00:25:22.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:22.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.608 issued rwts: total=0,4053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.608 job9: (groupid=0, jobs=1): err= 0: pid=3875266: Tue Jul 23 10:46:10 2024 00:25:22.608 write: IOPS=516, BW=129MiB/s (135MB/s)(1307MiB/10125msec); 0 zone resets 00:25:22.609 slat (usec): min=19, max=133562, avg=717.83, stdev=4114.92 00:25:22.609 clat (usec): min=761, max=614141, avg=123215.43, stdev=90407.04 00:25:22.609 lat (usec): min=787, max=614201, avg=123933.26, stdev=90983.25 00:25:22.609 clat percentiles (msec): 00:25:22.609 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 40], 00:25:22.609 | 30.00th=[ 63], 40.00th=[ 88], 50.00th=[ 114], 60.00th=[ 131], 00:25:22.609 | 70.00th=[ 155], 80.00th=[ 186], 90.00th=[ 251], 95.00th=[ 305], 00:25:22.609 | 99.00th=[ 368], 99.50th=[ 468], 99.90th=[ 514], 99.95th=[ 518], 00:25:22.609 | 99.99th=[ 617] 00:25:22.609 bw ( KiB/s): min=65536, max=200704, per=9.29%, avg=132169.30, stdev=33855.45, samples=20 00:25:22.609 iops : min= 256, max= 784, avg=516.20, stdev=132.29, samples=20 00:25:22.609 lat (usec) : 1000=0.10% 00:25:22.609 lat (msec) : 2=0.42%, 
4=0.90%, 10=3.44%, 20=5.49%, 50=15.40% 00:25:22.609 lat (msec) : 100=18.20%, 250=45.83%, 500=10.01%, 750=0.21% 00:25:22.609 cpu : usr=1.52%, sys=1.95%, ctx=4127, majf=0, minf=1 00:25:22.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:22.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.609 issued rwts: total=0,5226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.609 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.609 job10: (groupid=0, jobs=1): err= 0: pid=3875267: Tue Jul 23 10:46:10 2024 00:25:22.609 write: IOPS=494, BW=124MiB/s (130MB/s)(1261MiB/10203msec); 0 zone resets 00:25:22.609 slat (usec): min=22, max=116249, avg=1106.00, stdev=4023.76 00:25:22.609 clat (usec): min=1125, max=437673, avg=128055.05, stdev=82136.49 00:25:22.609 lat (usec): min=1177, max=437720, avg=129161.05, stdev=82762.10 00:25:22.609 clat percentiles (msec): 00:25:22.609 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 34], 20.00th=[ 54], 00:25:22.609 | 30.00th=[ 69], 40.00th=[ 86], 50.00th=[ 110], 60.00th=[ 140], 00:25:22.609 | 70.00th=[ 180], 80.00th=[ 207], 90.00th=[ 243], 95.00th=[ 275], 00:25:22.609 | 99.00th=[ 338], 99.50th=[ 380], 99.90th=[ 435], 99.95th=[ 439], 00:25:22.609 | 99.99th=[ 439] 00:25:22.609 bw ( KiB/s): min=61440, max=230912, per=8.97%, avg=127534.15, stdev=38770.76, samples=20 00:25:22.609 iops : min= 240, max= 902, avg=498.15, stdev=151.49, samples=20 00:25:22.609 lat (msec) : 2=0.12%, 4=0.26%, 10=1.59%, 20=3.01%, 50=12.80% 00:25:22.609 lat (msec) : 100=28.86%, 250=45.19%, 500=8.17% 00:25:22.609 cpu : usr=1.55%, sys=1.65%, ctx=3332, majf=0, minf=1 00:25:22.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:22.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:25:22.609 issued rwts: total=0,5045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.609 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.609 00:25:22.609 Run status group 0 (all jobs): 00:25:22.609 WRITE: bw=1389MiB/s (1457MB/s), 100MiB/s-151MiB/s (105MB/s-158MB/s), io=13.9GiB (14.9GB), run=10044-10228msec 00:25:22.609 00:25:22.609 Disk stats (read/write): 00:25:22.609 nvme0n1: ios=46/9076, merge=0/0, ticks=1530/1220608, in_queue=1222138, util=99.51% 00:25:22.609 nvme10n1: ios=49/12060, merge=0/0, ticks=48/1227841, in_queue=1227889, util=97.55% 00:25:22.609 nvme1n1: ios=44/11430, merge=0/0, ticks=886/1217411, in_queue=1218297, util=99.94% 00:25:22.609 nvme2n1: ios=33/9304, merge=0/0, ticks=36/1252949, in_queue=1252985, util=97.87% 00:25:22.609 nvme3n1: ios=22/10254, merge=0/0, ticks=36/1218912, in_queue=1218948, util=97.85% 00:25:22.609 nvme4n1: ios=46/11735, merge=0/0, ticks=2473/1236472, in_queue=1238945, util=100.00% 00:25:22.609 nvme5n1: ios=44/10372, merge=0/0, ticks=1811/1252351, in_queue=1254162, util=100.00% 00:25:22.609 nvme6n1: ios=47/9743, merge=0/0, ticks=853/1252161, in_queue=1253014, util=100.00% 00:25:22.609 nvme7n1: ios=43/7890, merge=0/0, ticks=1289/1213853, in_queue=1215142, util=100.00% 00:25:22.609 nvme8n1: ios=0/10244, merge=0/0, ticks=0/1227462, in_queue=1227462, util=98.94% 00:25:22.609 nvme9n1: ios=42/10059, merge=0/0, ticks=2790/1250200, in_queue=1252990, util=100.00% 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:22.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:22.609 10:46:10 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:22.609 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:22.609 10:46:10 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.609 10:46:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:22.868 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 
00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.868 10:46:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:23.127 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:23.127 10:46:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:23.127 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:23.127 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:23.127 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:23.127 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:23.127 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:23.127 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:23.127 10:46:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:23.127 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.127 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.127 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.127 10:46:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.127 10:46:11 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:23.385 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.385 10:46:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:23.642 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:23.642 10:46:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:23.642 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:23.642 10:46:11 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:23.643 10:46:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:23.643 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:23.643 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:23.643 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:23.643 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:23.643 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.643 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.643 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.643 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.643 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:23.901 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:23.901 10:46:12 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:23.901 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.901 10:46:12 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.901 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:24.159 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.159 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:24.417 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:24.417 10:46:12 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:24.417 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:24.417 10:46:12 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:24.417 rmmod nvme_tcp 00:25:24.417 rmmod nvme_fabrics 00:25:24.417 rmmod nvme_keyring 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@125 -- # return 0 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3871023 ']' 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3871023 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 3871023 ']' 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 3871023 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:24.417 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:24.418 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3871023 00:25:24.676 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:24.676 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:24.676 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3871023' 00:25:24.676 killing process with pid 3871023 00:25:24.676 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 3871023 00:25:24.676 10:46:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 3871023 00:25:24.936 10:46:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:24.936 10:46:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:24.936 10:46:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:24.936 10:46:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:24.936 10:46:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:24.936 10:46:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:25:24.936 10:46:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.936 10:46:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.842 10:46:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:27.101 00:25:27.101 real 0m58.275s 00:25:27.101 user 3m12.287s 00:25:27.101 sys 0m25.336s 00:25:27.101 10:46:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:27.101 10:46:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.101 ************************************ 00:25:27.101 END TEST nvmf_multiconnection 00:25:27.101 ************************************ 00:25:27.101 10:46:15 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:27.101 10:46:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:27.101 10:46:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:27.101 10:46:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:27.101 ************************************ 00:25:27.101 START TEST nvmf_initiator_timeout 00:25:27.101 ************************************ 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:27.101 * Looking for test storage... 
00:25:27.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.101 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:27.102 10:46:15 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:27.102 10:46:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 
00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.478 
10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:25:28.478 Found 0000:08:00.0 (0x8086 - 0x159b) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 
-- # [[ tcp == rdma ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:25:28.478 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:08:00.0: cvl_0_0' 00:25:28.478 Found net devices under 0000:08:00.0: cvl_0_0 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.478 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:28.479 Found net devices under 0000:08:00.1: cvl_0_1 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.479 10:46:16 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:28.479 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.736 10:46:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:28.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:25:28.737 00:25:28.737 --- 10.0.0.2 ping statistics --- 00:25:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.737 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:25:28.737 00:25:28.737 --- 10.0.0.1 ping statistics --- 00:25:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.737 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3877913 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3877913 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 3877913 ']' 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:28.737 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:28.737 [2024-07-23 10:46:17.154526] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:25:28.737 [2024-07-23 10:46:17.154621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.737 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.737 [2024-07-23 10:46:17.219095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:28.994 [2024-07-23 10:46:17.307088] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.994 [2024-07-23 10:46:17.307161] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.994 [2024-07-23 10:46:17.307178] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.994 [2024-07-23 10:46:17.307191] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.994 [2024-07-23 10:46:17.307203] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:28.994 [2024-07-23 10:46:17.307289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.994 [2024-07-23 10:46:17.307343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.994 [2024-07-23 10:46:17.307392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:28.994 [2024-07-23 10:46:17.307395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:28.994 Malloc0 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:28.994 Delay0 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:28.994 [2024-07-23 10:46:17.469629] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:28.994 10:46:17 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.994 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:29.251 [2024-07-23 10:46:17.497894] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.252 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.252 10:46:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:29.510 10:46:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:29.510 10:46:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:29.510 10:46:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.510 10:46:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:29.510 10:46:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:32.038 10:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:32.038 10:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:32.038 10:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:32.038 10:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:32.038 10:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:32.038 10:46:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:32.038 10:46:20 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3878196 00:25:32.038 10:46:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:32.038 10:46:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:32.038 [global] 00:25:32.038 thread=1 00:25:32.038 invalidate=1 00:25:32.038 rw=write 00:25:32.038 time_based=1 00:25:32.038 runtime=60 00:25:32.038 ioengine=libaio 00:25:32.038 direct=1 00:25:32.038 bs=4096 00:25:32.038 iodepth=1 00:25:32.038 norandommap=0 00:25:32.038 numjobs=1 00:25:32.038 00:25:32.038 verify_dump=1 00:25:32.038 verify_backlog=512 00:25:32.038 verify_state_save=0 00:25:32.038 do_verify=1 00:25:32.038 verify=crc32c-intel 00:25:32.038 [job0] 00:25:32.038 filename=/dev/nvme0n1 00:25:32.038 Could not set queue depth (nvme0n1) 00:25:32.038 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:32.038 fio-3.35 00:25:32.038 Starting 1 thread 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.567 true 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.567 true 00:25:34.567 10:46:23 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.567 true 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.567 true 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.567 10:46:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.847 true 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.847 true 
00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.847 true 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.847 true 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:37.847 10:46:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3878196 00:26:34.060 00:26:34.060 job0: (groupid=0, jobs=1): err= 0: pid=3878294: Tue Jul 23 10:47:20 2024 00:26:34.060 read: IOPS=67, BW=269KiB/s (276kB/s)(15.8MiB/60037msec) 00:26:34.060 slat (usec): min=5, max=6884, avg=14.30, stdev=108.24 00:26:34.060 clat (usec): min=227, max=40725k, avg=14594.94, stdev=640530.55 00:26:34.060 lat (usec): min=233, max=40725k, avg=14609.24, stdev=640530.99 00:26:34.060 clat percentiles (usec): 00:26:34.060 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 253], 00:26:34.060 | 20.00th=[ 265], 30.00th=[ 269], 40.00th=[ 273], 00:26:34.060 | 50.00th=[ 277], 60.00th=[ 281], 70.00th=[ 285], 00:26:34.060 | 80.00th=[ 289], 90.00th=[ 40633], 95.00th=[ 41157], 00:26:34.060 | 99.00th=[ 41157], 99.50th=[ 41157], 
99.90th=[ 41681], 00:26:34.061 | 99.95th=[ 42206], 99.99th=[17112761] 00:26:34.061 write: IOPS=68, BW=273KiB/s (279kB/s)(16.0MiB/60037msec); 0 zone resets 00:26:34.061 slat (nsec): min=6390, max=59799, avg=14485.98, stdev=6094.23 00:26:34.061 clat (usec): min=177, max=2061, avg=215.16, stdev=36.33 00:26:34.061 lat (usec): min=185, max=2079, avg=229.65, stdev=37.65 00:26:34.061 clat percentiles (usec): 00:26:34.061 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:26:34.061 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 215], 00:26:34.061 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 262], 00:26:34.061 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 314], 99.95th=[ 318], 00:26:34.061 | 99.99th=[ 2057] 00:26:34.061 bw ( KiB/s): min= 1240, max= 8192, per=100.00%, avg=4681.14, stdev=2416.76, samples=7 00:26:34.061 iops : min= 310, max= 2048, avg=1170.29, stdev=604.19, samples=7 00:26:34.061 lat (usec) : 250=51.03%, 500=43.76% 00:26:34.061 lat (msec) : 4=0.01%, 50=5.18%, >=2000=0.01% 00:26:34.061 cpu : usr=0.12%, sys=0.21%, ctx=8140, majf=0, minf=1 00:26:34.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.061 issued rwts: total=4043,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:34.061 00:26:34.061 Run status group 0 (all jobs): 00:26:34.061 READ: bw=269KiB/s (276kB/s), 269KiB/s-269KiB/s (276kB/s-276kB/s), io=15.8MiB (16.6MB), run=60037-60037msec 00:26:34.061 WRITE: bw=273KiB/s (279kB/s), 273KiB/s-273KiB/s (279kB/s-279kB/s), io=16.0MiB (16.8MB), run=60037-60037msec 00:26:34.061 00:26:34.061 Disk stats (read/write): 00:26:34.061 nvme0n1: ios=4138/4096, merge=0/0, ticks=18782/829, in_queue=19611, util=99.84% 00:26:34.061 10:47:20 
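[Editor's note] The fio summary above can be cross-checked by hand: issued I/Os times the 4 KiB block size, divided by the reported runtime, should reproduce the READ/WRITE bandwidth lines. The sketch below redoes that arithmetic with the values copied from this log (4043 reads, 4096 writes, bs=4096, run=60037 msec); it is an illustrative check only, not part of the test suite.

```shell
#!/bin/sh
# Recompute fio's reported bandwidth from issued rwts and runtime.
# Values are taken verbatim from the job0 summary in this log.
reads=4043; writes=4096; bs_kib=4; runtime_ms=60037

awk -v n="$reads" -v bs="$bs_kib" -v t="$runtime_ms" \
    'BEGIN { printf "read  bw: %.0f KiB/s\n", n * bs / (t / 1000) }'
# -> read  bw: 269 KiB/s   (matches "READ: bw=269KiB/s" above)

awk -v n="$writes" -v bs="$bs_kib" -v t="$runtime_ms" \
    'BEGIN { printf "write bw: %.0f KiB/s\n", n * bs / (t / 1000) }'
# -> write bw: 273 KiB/s   (matches "WRITE: bw=273KiB/s" above)
```

The ~270 KiB/s figure is consistent with iodepth=1 against the Delay0 bdev whose per-I/O latency the test inflates and then restores via `bdev_delay_update_latency`.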
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:34.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:34.061 nvmf hotplug test: fio successful as expected 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:34.061 rmmod nvme_tcp 00:26:34.061 rmmod nvme_fabrics 00:26:34.061 rmmod nvme_keyring 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3877913 ']' 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3877913 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 3877913 ']' 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 3877913 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3877913 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3877913' 00:26:34.061 killing process with pid 3877913 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 3877913 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 3877913 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.061 10:47:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.641 10:47:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:34.641 00:26:34.641 real 1m7.483s 00:26:34.641 user 4m9.300s 00:26:34.641 sys 0m6.127s 00:26:34.641 10:47:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:34.641 10:47:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.641 ************************************ 00:26:34.641 END TEST nvmf_initiator_timeout 00:26:34.641 ************************************ 00:26:34.641 10:47:22 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy 
]] 00:26:34.641 10:47:22 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:34.641 10:47:22 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:34.641 10:47:22 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:34.641 10:47:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:26:36.068 Found 0000:08:00.0 (0x8086 - 0x159b) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:26:36.068 Found 0000:08:00.1 (0x8086 - 0x159b) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.068 
10:47:24 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:26:36.068 Found net devices under 0000:08:00.0: cvl_0_0 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:26:36.068 Found net devices under 0000:08:00.1: cvl_0_1 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:36.068 10:47:24 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:36.068 10:47:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:36.326 10:47:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:36.326 10:47:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:36.326 ************************************ 00:26:36.326 START TEST nvmf_perf_adq 00:26:36.326 ************************************ 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:36.326 * Looking for test storage... 
00:26:36.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:36.326 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:36.327 10:47:24 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:36.327 10:47:24 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:36.327 10:47:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:26:38.232 Found 0000:08:00.0 (0x8086 - 0x159b) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:38.232 10:47:26 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:26:38.232 Found 0000:08:00.1 (0x8086 - 0x159b) 00:26:38.232 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.233 
10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:26:38.233 Found net devices under 0000:08:00.0: cvl_0_0 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:26:38.233 Found net devices under 0000:08:00.1: cvl_0_1 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:38.233 10:47:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:38.490 10:47:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:40.390 10:47:28 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:45.667 10:47:33 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:26:45.667 Found 0000:08:00.0 (0x8086 - 0x159b) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:26:45.667 Found 0000:08:00.1 (0x8086 - 0x159b) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.667 10:47:33 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:26:45.667 Found net devices under 0000:08:00.0: cvl_0_0 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:26:45.667 Found net devices under 0000:08:00.1: cvl_0_1 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:26:45.667 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:45.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:26:45.668 00:26:45.668 --- 10.0.0.2 ping statistics --- 00:26:45.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.668 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:45.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:26:45.668 00:26:45.668 --- 10.0.0.1 ping statistics --- 00:26:45.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.668 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3887171 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3887171 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 
-- # '[' -z 3887171 ']' 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:45.668 10:47:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.668 [2024-07-23 10:47:33.820626] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:26:45.668 [2024-07-23 10:47:33.820722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.668 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.668 [2024-07-23 10:47:33.886107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:45.668 [2024-07-23 10:47:33.974270] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.668 [2024-07-23 10:47:33.974340] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.668 [2024-07-23 10:47:33.974366] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.668 [2024-07-23 10:47:33.974385] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.668 [2024-07-23 10:47:33.974403] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:45.668 [2024-07-23 10:47:33.974503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.668 [2024-07-23 10:47:33.974571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.668 [2024-07-23 10:47:33.974534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.668 [2024-07-23 10:47:33.974564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.668 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.927 [2024-07-23 10:47:34.256034] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.927 Malloc1 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.927 
10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.927 [2024-07-23 10:47:34.306168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3887204 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:45.927 10:47:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:45.928 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.830 10:47:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:47.830 10:47:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.830 10:47:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:48.088 10:47:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.088 10:47:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:48.088 
"tick_rate": 2700000000, 00:26:48.088 "poll_groups": [ 00:26:48.088 { 00:26:48.088 "name": "nvmf_tgt_poll_group_000", 00:26:48.088 "admin_qpairs": 1, 00:26:48.088 "io_qpairs": 1, 00:26:48.088 "current_admin_qpairs": 1, 00:26:48.088 "current_io_qpairs": 1, 00:26:48.088 "pending_bdev_io": 0, 00:26:48.088 "completed_nvme_io": 18299, 00:26:48.088 "transports": [ 00:26:48.088 { 00:26:48.088 "trtype": "TCP" 00:26:48.088 } 00:26:48.088 ] 00:26:48.088 }, 00:26:48.088 { 00:26:48.088 "name": "nvmf_tgt_poll_group_001", 00:26:48.088 "admin_qpairs": 0, 00:26:48.088 "io_qpairs": 1, 00:26:48.088 "current_admin_qpairs": 0, 00:26:48.088 "current_io_qpairs": 1, 00:26:48.088 "pending_bdev_io": 0, 00:26:48.088 "completed_nvme_io": 18388, 00:26:48.088 "transports": [ 00:26:48.088 { 00:26:48.088 "trtype": "TCP" 00:26:48.088 } 00:26:48.088 ] 00:26:48.088 }, 00:26:48.088 { 00:26:48.088 "name": "nvmf_tgt_poll_group_002", 00:26:48.088 "admin_qpairs": 0, 00:26:48.088 "io_qpairs": 1, 00:26:48.088 "current_admin_qpairs": 0, 00:26:48.088 "current_io_qpairs": 1, 00:26:48.088 "pending_bdev_io": 0, 00:26:48.088 "completed_nvme_io": 18815, 00:26:48.088 "transports": [ 00:26:48.088 { 00:26:48.088 "trtype": "TCP" 00:26:48.088 } 00:26:48.088 ] 00:26:48.088 }, 00:26:48.088 { 00:26:48.088 "name": "nvmf_tgt_poll_group_003", 00:26:48.088 "admin_qpairs": 0, 00:26:48.088 "io_qpairs": 1, 00:26:48.088 "current_admin_qpairs": 0, 00:26:48.088 "current_io_qpairs": 1, 00:26:48.088 "pending_bdev_io": 0, 00:26:48.088 "completed_nvme_io": 17924, 00:26:48.088 "transports": [ 00:26:48.088 { 00:26:48.088 "trtype": "TCP" 00:26:48.088 } 00:26:48.088 ] 00:26:48.088 } 00:26:48.088 ] 00:26:48.088 }' 00:26:48.088 10:47:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:48.088 10:47:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:48.088 10:47:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:48.088 10:47:36 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:48.088 10:47:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3887204 00:26:56.202 Initializing NVMe Controllers 00:26:56.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:56.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:56.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:56.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:56.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:56.202 Initialization complete. Launching workers. 00:26:56.202 ======================================================== 00:26:56.202 Latency(us) 00:26:56.202 Device Information : IOPS MiB/s Average min max 00:26:56.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9888.50 38.63 6471.87 1974.01 10603.44 00:26:56.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9597.60 37.49 6669.00 2313.33 10783.07 00:26:56.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9373.80 36.62 6829.00 2901.85 13136.59 00:26:56.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9569.30 37.38 6689.47 2829.64 9617.42 00:26:56.202 ======================================================== 00:26:56.202 Total : 38429.20 150.11 6662.40 1974.01 13136.59 00:26:56.202 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:56.202 10:47:44 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:56.202 rmmod nvme_tcp 00:26:56.202 rmmod nvme_fabrics 00:26:56.202 rmmod nvme_keyring 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3887171 ']' 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3887171 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3887171 ']' 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3887171 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3887171 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3887171' 00:26:56.202 killing process with pid 3887171 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3887171 00:26:56.202 10:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3887171 00:26:56.462 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:56.462 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:56.462 10:47:44 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:56.462 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.462 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:56.462 10:47:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.462 10:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.462 10:47:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.370 10:47:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:58.370 10:47:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:58.370 10:47:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:58.938 10:47:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:00.841 10:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.121 10:47:54 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:06.121 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:06.121 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:08:00.0: cvl_0_0' 00:27:06.121 Found net devices under 0000:08:00.0: cvl_0_0 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:06.121 Found net devices under 0000:08:00.1: cvl_0_1 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.121 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:06.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:06.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:27:06.122 00:27:06.122 --- 10.0.0.2 ping statistics --- 00:27:06.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.122 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:06.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:27:06.122 00:27:06.122 --- 10.0.0.1 ping statistics --- 00:27:06.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.122 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:06.122 net.core.busy_poll = 1 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:06.122 net.core.busy_read = 1 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3889205 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3889205 00:27:06.122 10:47:54 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3889205 ']' 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:06.122 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.122 [2024-07-23 10:47:54.499131] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:06.122 [2024-07-23 10:47:54.499219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.122 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.122 [2024-07-23 10:47:54.564670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:06.381 [2024-07-23 10:47:54.644570] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.381 [2024-07-23 10:47:54.644629] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.381 [2024-07-23 10:47:54.644642] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.381 [2024-07-23 10:47:54.644653] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.381 [2024-07-23 10:47:54.644662] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:06.381 [2024-07-23 10:47:54.644716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.381 [2024-07-23 10:47:54.644744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.381 [2024-07-23 10:47:54.644798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:06.381 [2024-07-23 10:47:54.644823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.381 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.641 [2024-07-23 10:47:54.886886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.641 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.642 Malloc1 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.642 
10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.642 [2024-07-23 10:47:54.936098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3889321 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:06.642 10:47:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:06.642 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.547 10:47:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:08.547 10:47:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.547 10:47:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:08.547 10:47:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.547 10:47:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:08.547 
"tick_rate": 2700000000, 00:27:08.547 "poll_groups": [ 00:27:08.547 { 00:27:08.547 "name": "nvmf_tgt_poll_group_000", 00:27:08.547 "admin_qpairs": 1, 00:27:08.547 "io_qpairs": 1, 00:27:08.547 "current_admin_qpairs": 1, 00:27:08.547 "current_io_qpairs": 1, 00:27:08.547 "pending_bdev_io": 0, 00:27:08.547 "completed_nvme_io": 23163, 00:27:08.547 "transports": [ 00:27:08.547 { 00:27:08.547 "trtype": "TCP" 00:27:08.547 } 00:27:08.547 ] 00:27:08.547 }, 00:27:08.547 { 00:27:08.547 "name": "nvmf_tgt_poll_group_001", 00:27:08.547 "admin_qpairs": 0, 00:27:08.547 "io_qpairs": 3, 00:27:08.547 "current_admin_qpairs": 0, 00:27:08.547 "current_io_qpairs": 3, 00:27:08.547 "pending_bdev_io": 0, 00:27:08.547 "completed_nvme_io": 23716, 00:27:08.547 "transports": [ 00:27:08.547 { 00:27:08.547 "trtype": "TCP" 00:27:08.547 } 00:27:08.547 ] 00:27:08.547 }, 00:27:08.547 { 00:27:08.547 "name": "nvmf_tgt_poll_group_002", 00:27:08.547 "admin_qpairs": 0, 00:27:08.547 "io_qpairs": 0, 00:27:08.547 "current_admin_qpairs": 0, 00:27:08.547 "current_io_qpairs": 0, 00:27:08.547 "pending_bdev_io": 0, 00:27:08.547 "completed_nvme_io": 0, 00:27:08.547 "transports": [ 00:27:08.547 { 00:27:08.547 "trtype": "TCP" 00:27:08.547 } 00:27:08.547 ] 00:27:08.547 }, 00:27:08.547 { 00:27:08.547 "name": "nvmf_tgt_poll_group_003", 00:27:08.547 "admin_qpairs": 0, 00:27:08.547 "io_qpairs": 0, 00:27:08.547 "current_admin_qpairs": 0, 00:27:08.547 "current_io_qpairs": 0, 00:27:08.547 "pending_bdev_io": 0, 00:27:08.547 "completed_nvme_io": 0, 00:27:08.547 "transports": [ 00:27:08.547 { 00:27:08.547 "trtype": "TCP" 00:27:08.547 } 00:27:08.547 ] 00:27:08.547 } 00:27:08.547 ] 00:27:08.547 }' 00:27:08.547 10:47:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:08.547 10:47:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:08.547 10:47:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:08.547 10:47:57 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:08.547 10:47:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3889321 00:27:16.661 Initializing NVMe Controllers 00:27:16.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:16.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:16.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:16.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:16.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:16.661 Initialization complete. Launching workers. 00:27:16.661 ======================================================== 00:27:16.661 Latency(us) 00:27:16.661 Device Information : IOPS MiB/s Average min max 00:27:16.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3954.31 15.45 16245.39 2552.90 65253.57 00:27:16.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4312.59 16.85 14844.56 2553.22 62401.50 00:27:16.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12319.31 48.12 5194.77 2097.54 7722.70 00:27:16.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4221.10 16.49 15165.11 1346.21 62438.82 00:27:16.661 ======================================================== 00:27:16.661 Total : 24807.31 96.90 10330.31 1346.21 65253.57 00:27:16.661 00:27:16.661 10:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:16.661 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.661 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:16.661 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.661 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:16.661 
10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.661 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.661 rmmod nvme_tcp 00:27:16.661 rmmod nvme_fabrics 00:27:16.921 rmmod nvme_keyring 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3889205 ']' 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3889205 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3889205 ']' 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3889205 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3889205 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3889205' 00:27:16.921 killing process with pid 3889205 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3889205 00:27:16.921 10:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3889205 00:27:17.182 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:17.182 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:17.182 10:48:05 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:17.182 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.182 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:17.182 10:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.182 10:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.182 10:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.534 10:48:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:20.534 10:48:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:20.534 00:27:20.534 real 0m43.889s 00:27:20.534 user 2m36.751s 00:27:20.534 sys 0m10.445s 00:27:20.534 10:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:20.534 10:48:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.534 ************************************ 00:27:20.534 END TEST nvmf_perf_adq 00:27:20.534 ************************************ 00:27:20.534 10:48:08 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:20.534 10:48:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:20.534 10:48:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:20.534 10:48:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:20.534 ************************************ 00:27:20.534 START TEST nvmf_shutdown 00:27:20.534 ************************************ 00:27:20.534 10:48:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:20.534 * Looking for test storage... 
00:27:20.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.535 10:48:08 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.535 10:48:08 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:20.535 ************************************ 00:27:20.535 START TEST nvmf_shutdown_tc1 00:27:20.535 ************************************ 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.535 10:48:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.535 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:27:21.913 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:21.914 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:21.914 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:27:21.914 Found net devices under 0000:08:00.0: cvl_0_0 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.914 10:48:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:21.914 Found net devices under 0000:08:00.1: cvl_0_1 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:21.914 
10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:21.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:21.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:27:21.914 00:27:21.914 --- 10.0.0.2 ping statistics --- 00:27:21.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.914 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:21.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:27:21.914 00:27:21.914 --- 10.0.0.1 ping statistics --- 00:27:21.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.914 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3892463 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3892463 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3892463 ']' 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:21.914 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.915 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:21.915 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:21.915 [2024-07-23 10:48:10.406840] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:27:21.915 [2024-07-23 10:48:10.406934] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.175 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.175 [2024-07-23 10:48:10.472066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:22.175 [2024-07-23 10:48:10.562456] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.175 [2024-07-23 10:48:10.562527] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.175 [2024-07-23 10:48:10.562548] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.175 [2024-07-23 10:48:10.562561] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.175 [2024-07-23 10:48:10.562573] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:22.175 [2024-07-23 10:48:10.562662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.175 [2024-07-23 10:48:10.562744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.175 [2024-07-23 10:48:10.562828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:22.175 [2024-07-23 10:48:10.562832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:22.434 [2024-07-23 10:48:10.709135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:22.434 
10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:22.434 10:48:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.434 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:22.434 Malloc1 00:27:22.434 [2024-07-23 10:48:10.799724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.434 Malloc2 00:27:22.434 Malloc3 00:27:22.434 Malloc4 00:27:22.692 Malloc5 00:27:22.692 Malloc6 00:27:22.692 Malloc7 00:27:22.692 Malloc8 00:27:22.692 Malloc9 00:27:22.692 Malloc10 00:27:22.950 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.950 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:22.950 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.950 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:22.950 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3892613 00:27:22.950 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3892613 
/var/tmp/bdevperf.sock 00:27:22.950 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3892613 ']' 00:27:22.950 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:22.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.951 { 00:27:22.951 "params": { 00:27:22.951 "name": "Nvme$subsystem", 00:27:22.951 "trtype": "$TEST_TRANSPORT", 00:27:22.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.951 "adrfam": "ipv4", 00:27:22.951 "trsvcid": "$NVMF_PORT", 00:27:22.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.951 "hdgst": ${hdgst:-false}, 00:27:22.951 "ddgst": ${ddgst:-false} 00:27:22.951 }, 00:27:22.951 "method": "bdev_nvme_attach_controller" 00:27:22.951 } 00:27:22.951 EOF 00:27:22.951 )") 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.951 { 00:27:22.951 "params": { 00:27:22.951 "name": "Nvme$subsystem", 00:27:22.951 "trtype": "$TEST_TRANSPORT", 00:27:22.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.951 "adrfam": "ipv4", 00:27:22.951 "trsvcid": "$NVMF_PORT", 00:27:22.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.951 "hdgst": ${hdgst:-false}, 00:27:22.951 "ddgst": ${ddgst:-false} 00:27:22.951 }, 00:27:22.951 "method": "bdev_nvme_attach_controller" 00:27:22.951 } 00:27:22.951 EOF 00:27:22.951 
)") 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.951 { 00:27:22.951 "params": { 00:27:22.951 "name": "Nvme$subsystem", 00:27:22.951 "trtype": "$TEST_TRANSPORT", 00:27:22.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.951 "adrfam": "ipv4", 00:27:22.951 "trsvcid": "$NVMF_PORT", 00:27:22.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.951 "hdgst": ${hdgst:-false}, 00:27:22.951 "ddgst": ${ddgst:-false} 00:27:22.951 }, 00:27:22.951 "method": "bdev_nvme_attach_controller" 00:27:22.951 } 00:27:22.951 EOF 00:27:22.951 )") 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.951 { 00:27:22.951 "params": { 00:27:22.951 "name": "Nvme$subsystem", 00:27:22.951 "trtype": "$TEST_TRANSPORT", 00:27:22.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.951 "adrfam": "ipv4", 00:27:22.951 "trsvcid": "$NVMF_PORT", 00:27:22.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.951 "hdgst": ${hdgst:-false}, 00:27:22.951 "ddgst": ${ddgst:-false} 00:27:22.951 }, 00:27:22.951 "method": "bdev_nvme_attach_controller" 00:27:22.951 } 00:27:22.951 EOF 00:27:22.951 )") 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.951 10:48:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.951 { 00:27:22.951 "params": { 00:27:22.951 "name": "Nvme$subsystem", 00:27:22.951 "trtype": "$TEST_TRANSPORT", 00:27:22.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.951 "adrfam": "ipv4", 00:27:22.951 "trsvcid": "$NVMF_PORT", 00:27:22.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.951 "hdgst": ${hdgst:-false}, 00:27:22.951 "ddgst": ${ddgst:-false} 00:27:22.951 }, 00:27:22.951 "method": "bdev_nvme_attach_controller" 00:27:22.951 } 00:27:22.951 EOF 00:27:22.951 )") 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.951 { 00:27:22.951 "params": { 00:27:22.951 "name": "Nvme$subsystem", 00:27:22.951 "trtype": "$TEST_TRANSPORT", 00:27:22.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.951 "adrfam": "ipv4", 00:27:22.951 "trsvcid": "$NVMF_PORT", 00:27:22.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.951 "hdgst": ${hdgst:-false}, 00:27:22.951 "ddgst": ${ddgst:-false} 00:27:22.951 }, 00:27:22.951 "method": "bdev_nvme_attach_controller" 00:27:22.951 } 00:27:22.951 EOF 00:27:22.951 )") 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.951 { 00:27:22.951 "params": { 00:27:22.951 "name": "Nvme$subsystem", 00:27:22.951 "trtype": "$TEST_TRANSPORT", 00:27:22.951 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:22.951 "adrfam": "ipv4", 00:27:22.951 "trsvcid": "$NVMF_PORT", 00:27:22.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.951 "hdgst": ${hdgst:-false}, 00:27:22.951 "ddgst": ${ddgst:-false} 00:27:22.951 }, 00:27:22.951 "method": "bdev_nvme_attach_controller" 00:27:22.951 } 00:27:22.951 EOF 00:27:22.951 )") 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.951 { 00:27:22.951 "params": { 00:27:22.951 "name": "Nvme$subsystem", 00:27:22.951 "trtype": "$TEST_TRANSPORT", 00:27:22.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.951 "adrfam": "ipv4", 00:27:22.951 "trsvcid": "$NVMF_PORT", 00:27:22.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.951 "hdgst": ${hdgst:-false}, 00:27:22.951 "ddgst": ${ddgst:-false} 00:27:22.951 }, 00:27:22.951 "method": "bdev_nvme_attach_controller" 00:27:22.951 } 00:27:22.951 EOF 00:27:22.951 )") 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.951 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.951 { 00:27:22.951 "params": { 00:27:22.951 "name": "Nvme$subsystem", 00:27:22.951 "trtype": "$TEST_TRANSPORT", 00:27:22.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.952 "adrfam": "ipv4", 00:27:22.952 "trsvcid": "$NVMF_PORT", 00:27:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.952 
"hdgst": ${hdgst:-false}, 00:27:22.952 "ddgst": ${ddgst:-false} 00:27:22.952 }, 00:27:22.952 "method": "bdev_nvme_attach_controller" 00:27:22.952 } 00:27:22.952 EOF 00:27:22.952 )") 00:27:22.952 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.952 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.952 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.952 { 00:27:22.952 "params": { 00:27:22.952 "name": "Nvme$subsystem", 00:27:22.952 "trtype": "$TEST_TRANSPORT", 00:27:22.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.952 "adrfam": "ipv4", 00:27:22.952 "trsvcid": "$NVMF_PORT", 00:27:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.952 "hdgst": ${hdgst:-false}, 00:27:22.952 "ddgst": ${ddgst:-false} 00:27:22.952 }, 00:27:22.952 "method": "bdev_nvme_attach_controller" 00:27:22.952 } 00:27:22.952 EOF 00:27:22.952 )") 00:27:22.952 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.952 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:22.952 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:22.952 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:22.952 "params": { 00:27:22.952 "name": "Nvme1", 00:27:22.952 "trtype": "tcp", 00:27:22.952 "traddr": "10.0.0.2", 00:27:22.952 "adrfam": "ipv4", 00:27:22.952 "trsvcid": "4420", 00:27:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:22.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:22.952 "hdgst": false, 00:27:22.952 "ddgst": false 00:27:22.952 }, 00:27:22.952 "method": "bdev_nvme_attach_controller" 00:27:22.952 },{ 00:27:22.952 "params": { 00:27:22.952 "name": "Nvme2", 00:27:22.952 "trtype": "tcp", 00:27:22.952 "traddr": "10.0.0.2", 00:27:22.952 "adrfam": "ipv4", 00:27:22.952 "trsvcid": "4420", 00:27:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:22.952 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:22.952 "hdgst": false, 00:27:22.952 "ddgst": false 00:27:22.952 }, 00:27:22.952 "method": "bdev_nvme_attach_controller" 00:27:22.952 },{ 00:27:22.952 "params": { 00:27:22.952 "name": "Nvme3", 00:27:22.952 "trtype": "tcp", 00:27:22.952 "traddr": "10.0.0.2", 00:27:22.952 "adrfam": "ipv4", 00:27:22.952 "trsvcid": "4420", 00:27:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:22.952 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:22.952 "hdgst": false, 00:27:22.952 "ddgst": false 00:27:22.952 }, 00:27:22.952 "method": "bdev_nvme_attach_controller" 00:27:22.952 },{ 00:27:22.952 "params": { 00:27:22.952 "name": "Nvme4", 00:27:22.952 "trtype": "tcp", 00:27:22.952 "traddr": "10.0.0.2", 00:27:22.952 "adrfam": "ipv4", 00:27:22.952 "trsvcid": "4420", 00:27:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:22.952 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:22.952 "hdgst": false, 00:27:22.952 "ddgst": false 00:27:22.952 }, 00:27:22.952 "method": "bdev_nvme_attach_controller" 00:27:22.952 },{ 00:27:22.952 "params": { 00:27:22.952 "name": "Nvme5", 00:27:22.952 
"trtype": "tcp", 00:27:22.952 "traddr": "10.0.0.2", 00:27:22.952 "adrfam": "ipv4", 00:27:22.952 "trsvcid": "4420", 00:27:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:22.952 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:22.952 "hdgst": false, 00:27:22.952 "ddgst": false 00:27:22.952 }, 00:27:22.952 "method": "bdev_nvme_attach_controller" 00:27:22.952 },{ 00:27:22.952 "params": { 00:27:22.952 "name": "Nvme6", 00:27:22.952 "trtype": "tcp", 00:27:22.952 "traddr": "10.0.0.2", 00:27:22.952 "adrfam": "ipv4", 00:27:22.952 "trsvcid": "4420", 00:27:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:22.952 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:22.952 "hdgst": false, 00:27:22.952 "ddgst": false 00:27:22.952 }, 00:27:22.952 "method": "bdev_nvme_attach_controller" 00:27:22.952 },{ 00:27:22.952 "params": { 00:27:22.952 "name": "Nvme7", 00:27:22.952 "trtype": "tcp", 00:27:22.952 "traddr": "10.0.0.2", 00:27:22.952 "adrfam": "ipv4", 00:27:22.952 "trsvcid": "4420", 00:27:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:22.952 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:22.952 "hdgst": false, 00:27:22.952 "ddgst": false 00:27:22.952 }, 00:27:22.952 "method": "bdev_nvme_attach_controller" 00:27:22.952 },{ 00:27:22.952 "params": { 00:27:22.952 "name": "Nvme8", 00:27:22.952 "trtype": "tcp", 00:27:22.952 "traddr": "10.0.0.2", 00:27:22.952 "adrfam": "ipv4", 00:27:22.952 "trsvcid": "4420", 00:27:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:22.952 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:22.952 "hdgst": false, 00:27:22.952 "ddgst": false 00:27:22.952 }, 00:27:22.952 "method": "bdev_nvme_attach_controller" 00:27:22.952 },{ 00:27:22.952 "params": { 00:27:22.952 "name": "Nvme9", 00:27:22.952 "trtype": "tcp", 00:27:22.952 "traddr": "10.0.0.2", 00:27:22.952 "adrfam": "ipv4", 00:27:22.952 "trsvcid": "4420", 00:27:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:22.952 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:22.952 "hdgst": false, 00:27:22.952 "ddgst": 
false 00:27:22.952 }, 00:27:22.952 "method": "bdev_nvme_attach_controller" 00:27:22.952 },{ 00:27:22.952 "params": { 00:27:22.952 "name": "Nvme10", 00:27:22.952 "trtype": "tcp", 00:27:22.952 "traddr": "10.0.0.2", 00:27:22.952 "adrfam": "ipv4", 00:27:22.952 "trsvcid": "4420", 00:27:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:22.952 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:22.952 "hdgst": false, 00:27:22.952 "ddgst": false 00:27:22.952 }, 00:27:22.952 "method": "bdev_nvme_attach_controller" 00:27:22.952 }' 00:27:22.952 [2024-07-23 10:48:11.268094] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:22.952 [2024-07-23 10:48:11.268179] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:22.952 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.952 [2024-07-23 10:48:11.330960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.952 [2024-07-23 10:48:11.418564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.857 10:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:24.857 10:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:24.857 10:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:24.857 10:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.857 10:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:24.857 10:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.857 10:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3892613 
00:27:24.857 10:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:24.857 10:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:26.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3892613 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3892463 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.234 { 00:27:26.234 "params": { 00:27:26.234 "name": "Nvme$subsystem", 00:27:26.234 "trtype": "$TEST_TRANSPORT", 00:27:26.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.234 "adrfam": "ipv4", 00:27:26.234 "trsvcid": "$NVMF_PORT", 00:27:26.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.234 "hdgst": ${hdgst:-false}, 00:27:26.234 "ddgst": ${ddgst:-false} 00:27:26.234 }, 00:27:26.234 "method": "bdev_nvme_attach_controller" 00:27:26.234 } 00:27:26.234 EOF 00:27:26.234 )") 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.234 { 00:27:26.234 "params": { 00:27:26.234 "name": "Nvme$subsystem", 00:27:26.234 "trtype": "$TEST_TRANSPORT", 00:27:26.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.234 "adrfam": "ipv4", 00:27:26.234 "trsvcid": "$NVMF_PORT", 00:27:26.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.234 "hdgst": ${hdgst:-false}, 00:27:26.234 "ddgst": ${ddgst:-false} 00:27:26.234 }, 00:27:26.234 "method": "bdev_nvme_attach_controller" 00:27:26.234 } 00:27:26.234 EOF 00:27:26.234 )") 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.234 { 00:27:26.234 "params": { 00:27:26.234 "name": "Nvme$subsystem", 00:27:26.234 "trtype": "$TEST_TRANSPORT", 00:27:26.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.234 "adrfam": "ipv4", 00:27:26.234 "trsvcid": "$NVMF_PORT", 00:27:26.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.234 "hdgst": ${hdgst:-false}, 00:27:26.234 "ddgst": ${ddgst:-false} 00:27:26.234 }, 00:27:26.234 "method": "bdev_nvme_attach_controller" 00:27:26.234 } 00:27:26.234 EOF 00:27:26.234 )") 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.234 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:27:26.234 { 00:27:26.234 "params": { 00:27:26.234 "name": "Nvme$subsystem", 00:27:26.234 "trtype": "$TEST_TRANSPORT", 00:27:26.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.234 "adrfam": "ipv4", 00:27:26.234 "trsvcid": "$NVMF_PORT", 00:27:26.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.234 "hdgst": ${hdgst:-false}, 00:27:26.234 "ddgst": ${ddgst:-false} 00:27:26.234 }, 00:27:26.234 "method": "bdev_nvme_attach_controller" 00:27:26.234 } 00:27:26.234 EOF 00:27:26.235 )") 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.235 { 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme$subsystem", 00:27:26.235 "trtype": "$TEST_TRANSPORT", 00:27:26.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "$NVMF_PORT", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.235 "hdgst": ${hdgst:-false}, 00:27:26.235 "ddgst": ${ddgst:-false} 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 } 00:27:26.235 EOF 00:27:26.235 )") 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.235 { 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme$subsystem", 00:27:26.235 "trtype": "$TEST_TRANSPORT", 00:27:26.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 
"trsvcid": "$NVMF_PORT", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.235 "hdgst": ${hdgst:-false}, 00:27:26.235 "ddgst": ${ddgst:-false} 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 } 00:27:26.235 EOF 00:27:26.235 )") 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.235 { 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme$subsystem", 00:27:26.235 "trtype": "$TEST_TRANSPORT", 00:27:26.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "$NVMF_PORT", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.235 "hdgst": ${hdgst:-false}, 00:27:26.235 "ddgst": ${ddgst:-false} 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 } 00:27:26.235 EOF 00:27:26.235 )") 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.235 { 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme$subsystem", 00:27:26.235 "trtype": "$TEST_TRANSPORT", 00:27:26.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "$NVMF_PORT", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.235 "hdgst": ${hdgst:-false}, 00:27:26.235 "ddgst": ${ddgst:-false} 
00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 } 00:27:26.235 EOF 00:27:26.235 )") 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.235 { 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme$subsystem", 00:27:26.235 "trtype": "$TEST_TRANSPORT", 00:27:26.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "$NVMF_PORT", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.235 "hdgst": ${hdgst:-false}, 00:27:26.235 "ddgst": ${ddgst:-false} 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 } 00:27:26.235 EOF 00:27:26.235 )") 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.235 { 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme$subsystem", 00:27:26.235 "trtype": "$TEST_TRANSPORT", 00:27:26.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "$NVMF_PORT", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.235 "hdgst": ${hdgst:-false}, 00:27:26.235 "ddgst": ${ddgst:-false} 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 } 00:27:26.235 EOF 00:27:26.235 )") 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:26.235 10:48:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:26.235 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme1", 00:27:26.235 "trtype": "tcp", 00:27:26.235 "traddr": "10.0.0.2", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "4420", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:26.235 "hdgst": false, 00:27:26.235 "ddgst": false 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 },{ 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme2", 00:27:26.235 "trtype": "tcp", 00:27:26.235 "traddr": "10.0.0.2", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "4420", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:26.235 "hdgst": false, 00:27:26.235 "ddgst": false 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 },{ 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme3", 00:27:26.235 "trtype": "tcp", 00:27:26.235 "traddr": "10.0.0.2", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "4420", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:26.235 "hdgst": false, 00:27:26.235 "ddgst": false 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 },{ 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme4", 00:27:26.235 "trtype": "tcp", 00:27:26.235 "traddr": "10.0.0.2", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "4420", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:26.235 "hdgst": false, 00:27:26.235 "ddgst": false 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 
},{ 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme5", 00:27:26.235 "trtype": "tcp", 00:27:26.235 "traddr": "10.0.0.2", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "4420", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:26.235 "hdgst": false, 00:27:26.235 "ddgst": false 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 },{ 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme6", 00:27:26.235 "trtype": "tcp", 00:27:26.235 "traddr": "10.0.0.2", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "4420", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:26.235 "hdgst": false, 00:27:26.235 "ddgst": false 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 },{ 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme7", 00:27:26.235 "trtype": "tcp", 00:27:26.235 "traddr": "10.0.0.2", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "4420", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:26.235 "hdgst": false, 00:27:26.235 "ddgst": false 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 },{ 00:27:26.235 "params": { 00:27:26.235 "name": "Nvme8", 00:27:26.235 "trtype": "tcp", 00:27:26.235 "traddr": "10.0.0.2", 00:27:26.235 "adrfam": "ipv4", 00:27:26.235 "trsvcid": "4420", 00:27:26.235 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:26.235 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:26.235 "hdgst": false, 00:27:26.235 "ddgst": false 00:27:26.235 }, 00:27:26.235 "method": "bdev_nvme_attach_controller" 00:27:26.235 },{ 00:27:26.235 "params": { 00:27:26.236 "name": "Nvme9", 00:27:26.236 "trtype": "tcp", 00:27:26.236 "traddr": "10.0.0.2", 00:27:26.236 "adrfam": "ipv4", 00:27:26.236 "trsvcid": "4420", 00:27:26.236 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:26.236 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:26.236 "hdgst": false, 00:27:26.236 "ddgst": false 00:27:26.236 }, 00:27:26.236 "method": "bdev_nvme_attach_controller" 00:27:26.236 },{ 00:27:26.236 "params": { 00:27:26.236 "name": "Nvme10", 00:27:26.236 "trtype": "tcp", 00:27:26.236 "traddr": "10.0.0.2", 00:27:26.236 "adrfam": "ipv4", 00:27:26.236 "trsvcid": "4420", 00:27:26.236 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:26.236 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:26.236 "hdgst": false, 00:27:26.236 "ddgst": false 00:27:26.236 }, 00:27:26.236 "method": "bdev_nvme_attach_controller" 00:27:26.236 }' 00:27:26.236 [2024-07-23 10:48:14.360134] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:26.236 [2024-07-23 10:48:14.360224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892856 ] 00:27:26.236 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.236 [2024-07-23 10:48:14.424682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.236 [2024-07-23 10:48:14.512297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.625 Running I/O for 1 seconds... 
00:27:29.011 00:27:29.011 Latency(us) 00:27:29.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.011 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.011 Verification LBA range: start 0x0 length 0x400 00:27:29.011 Nvme1n1 : 1.17 218.48 13.65 0.00 0.00 287010.70 17864.63 309135.74 00:27:29.011 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.011 Verification LBA range: start 0x0 length 0x400 00:27:29.011 Nvme2n1 : 1.16 166.19 10.39 0.00 0.00 372495.04 23010.42 355739.12 00:27:29.011 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.011 Verification LBA range: start 0x0 length 0x400 00:27:29.011 Nvme3n1 : 1.17 227.70 14.23 0.00 0.00 264966.56 7233.23 288940.94 00:27:29.011 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.011 Verification LBA range: start 0x0 length 0x400 00:27:29.011 Nvme4n1 : 1.15 167.36 10.46 0.00 0.00 355061.51 27185.30 312242.63 00:27:29.011 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.011 Verification LBA range: start 0x0 length 0x400 00:27:29.011 Nvme5n1 : 1.16 165.00 10.31 0.00 0.00 352866.10 21554.06 327777.09 00:27:29.011 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.011 Verification LBA range: start 0x0 length 0x400 00:27:29.011 Nvme6n1 : 1.14 168.93 10.56 0.00 0.00 336473.76 25631.86 312242.63 00:27:29.011 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.011 Verification LBA range: start 0x0 length 0x400 00:27:29.011 Nvme7n1 : 1.15 166.68 10.42 0.00 0.00 334150.42 21165.70 320009.86 00:27:29.011 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.011 Verification LBA range: start 0x0 length 0x400 00:27:29.011 Nvme8n1 : 1.21 215.65 13.48 0.00 0.00 252026.34 5631.24 302921.96 00:27:29.011 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:27:29.011 Verification LBA range: start 0x0 length 0x400 00:27:29.011 Nvme9n1 : 1.21 211.49 13.22 0.00 0.00 253955.60 22913.33 337097.77 00:27:29.011 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.011 Verification LBA range: start 0x0 length 0x400 00:27:29.011 Nvme10n1 : 1.22 266.64 16.66 0.00 0.00 197141.54 1019.45 296708.17 00:27:29.011 =================================================================================================================== 00:27:29.011 Total : 1974.11 123.38 0.00 0.00 290472.37 1019.45 355739.12 00:27:29.011 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:29.011 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:29.011 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:29.011 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:29.011 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:29.011 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:29.011 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:29.011 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:29.011 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:29.011 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:29.011 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:29.011 rmmod nvme_tcp 00:27:29.011 rmmod nvme_fabrics 00:27:29.269 rmmod 
nvme_keyring 00:27:29.269 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3892463 ']' 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3892463 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3892463 ']' 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3892463 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3892463 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3892463' 00:27:29.270 killing process with pid 3892463 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3892463 00:27:29.270 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3892463 00:27:29.528 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:29.528 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:29.528 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:29.528 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:29.528 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:29.528 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.528 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.528 10:48:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:32.066 00:27:32.066 real 0m11.317s 00:27:32.066 user 0m34.256s 00:27:32.066 sys 0m2.780s 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.066 ************************************ 00:27:32.066 END TEST nvmf_shutdown_tc1 00:27:32.066 ************************************ 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:32.066 ************************************ 00:27:32.066 START TEST nvmf_shutdown_tc2 00:27:32.066 ************************************ 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # 
nvmf_shutdown_tc2 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:32.066 10:48:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.066 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:32.066 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:32.066 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:32.067 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:27:32.067 Found net devices under 0000:08:00.0: cvl_0_0 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:32.067 Found net devices under 0000:08:00.1: cvl_0_1 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:32.067 10:48:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:32.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:27:32.067 00:27:32.067 --- 10.0.0.2 ping statistics --- 00:27:32.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.067 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:32.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:27:32.067 00:27:32.067 --- 10.0.0.1 ping statistics --- 00:27:32.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.067 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3893536 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3893536 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3893536 ']' 00:27:32.067 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.068 [2024-07-23 10:48:20.193019] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:32.068 [2024-07-23 10:48:20.193107] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.068 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.068 [2024-07-23 10:48:20.245407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:32.068 [2024-07-23 10:48:20.318234] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.068 [2024-07-23 10:48:20.318290] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
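The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the suite's `waitforlisten` helper. A minimal sketch of that pattern, assuming a simplified check (the real helper issues an RPC over the socket; here we only test process liveness and that the socket file exists, and the retry count and default path are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until the target process is up
# and its UNIX-domain RPC socket appears, with a bounded number of retries.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=50 i=0
    while (( i++ < max_retries )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $sock ]] && return 0               # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out waiting
}
```

The real helper additionally verifies the socket answers RPCs before returning, which guards against a process that created the socket but is not yet serving.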
00:27:32.068 [2024-07-23 10:48:20.318303] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.068 [2024-07-23 10:48:20.318313] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.068 [2024-07-23 10:48:20.318322] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.068 [2024-07-23 10:48:20.318396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.068 [2024-07-23 10:48:20.318449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.068 [2024-07-23 10:48:20.318512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:32.068 [2024-07-23 10:48:20.318515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.068 [2024-07-23 10:48:20.470776] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.068 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.068 Malloc1 00:27:32.068 [2024-07-23 10:48:20.559796] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.327 Malloc2 00:27:32.328 Malloc3 00:27:32.328 Malloc4 00:27:32.328 Malloc5 00:27:32.328 Malloc6 00:27:32.328 Malloc7 00:27:32.586 Malloc8 00:27:32.586 Malloc9 00:27:32.586 Malloc10 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3893595 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3893595 /var/tmp/bdevperf.sock 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3893595 ']' 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:32.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
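The `gen_nvmf_target_json` trace that follows builds one JSON fragment per subsystem with a command-substituted here-doc (so `$subsystem` expands inside the template), collects them in an array, and later joins them with commas. A minimal sketch of that technique, using an abbreviated fragment rather than the full controller-attach parameters seen in the log:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config generation: the here-doc runs inside
# $(...) so shell variables expand, and each expansion is appended to an
# array; joining with IFS=, yields a valid JSON array.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "name": "Nvme$subsystem",
  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
}
EOF
)")
    done
    local IFS=,
    printf '[%s]\n' "${config[*]}"
}
```

This is why the trace prints the literal template (with `$subsystem`, `${hdgst:-false}` placeholders) once per iteration: xtrace shows the here-doc body before expansion, while the array receives the expanded text.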
00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.586 { 00:27:32.586 "params": { 00:27:32.586 "name": "Nvme$subsystem", 00:27:32.586 "trtype": "$TEST_TRANSPORT", 00:27:32.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.586 "adrfam": "ipv4", 00:27:32.586 "trsvcid": "$NVMF_PORT", 00:27:32.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.586 "hdgst": ${hdgst:-false}, 00:27:32.586 "ddgst": ${ddgst:-false} 00:27:32.586 }, 00:27:32.586 "method": "bdev_nvme_attach_controller" 00:27:32.586 } 00:27:32.586 EOF 00:27:32.586 )") 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.586 { 00:27:32.586 "params": { 00:27:32.586 "name": "Nvme$subsystem", 00:27:32.586 "trtype": "$TEST_TRANSPORT", 00:27:32.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.586 "adrfam": "ipv4", 00:27:32.586 "trsvcid": "$NVMF_PORT", 00:27:32.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.586 "hdgst": ${hdgst:-false}, 00:27:32.586 "ddgst": ${ddgst:-false} 00:27:32.586 }, 00:27:32.586 "method": "bdev_nvme_attach_controller" 00:27:32.586 } 00:27:32.586 EOF 00:27:32.586 
)") 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.586 { 00:27:32.586 "params": { 00:27:32.586 "name": "Nvme$subsystem", 00:27:32.586 "trtype": "$TEST_TRANSPORT", 00:27:32.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.586 "adrfam": "ipv4", 00:27:32.586 "trsvcid": "$NVMF_PORT", 00:27:32.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.586 "hdgst": ${hdgst:-false}, 00:27:32.586 "ddgst": ${ddgst:-false} 00:27:32.586 }, 00:27:32.586 "method": "bdev_nvme_attach_controller" 00:27:32.586 } 00:27:32.586 EOF 00:27:32.586 )") 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.586 { 00:27:32.586 "params": { 00:27:32.586 "name": "Nvme$subsystem", 00:27:32.586 "trtype": "$TEST_TRANSPORT", 00:27:32.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.586 "adrfam": "ipv4", 00:27:32.586 "trsvcid": "$NVMF_PORT", 00:27:32.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.586 "hdgst": ${hdgst:-false}, 00:27:32.586 "ddgst": ${ddgst:-false} 00:27:32.586 }, 00:27:32.586 "method": "bdev_nvme_attach_controller" 00:27:32.586 } 00:27:32.586 EOF 00:27:32.586 )") 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:32.586 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.586 10:48:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.586 { 00:27:32.586 "params": { 00:27:32.586 "name": "Nvme$subsystem", 00:27:32.586 "trtype": "$TEST_TRANSPORT", 00:27:32.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.586 "adrfam": "ipv4", 00:27:32.586 "trsvcid": "$NVMF_PORT", 00:27:32.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.586 "hdgst": ${hdgst:-false}, 00:27:32.586 "ddgst": ${ddgst:-false} 00:27:32.586 }, 00:27:32.586 "method": "bdev_nvme_attach_controller" 00:27:32.586 } 00:27:32.586 EOF 00:27:32.586 )") 00:27:32.586 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:32.586 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.586 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.586 { 00:27:32.586 "params": { 00:27:32.586 "name": "Nvme$subsystem", 00:27:32.586 "trtype": "$TEST_TRANSPORT", 00:27:32.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.586 "adrfam": "ipv4", 00:27:32.586 "trsvcid": "$NVMF_PORT", 00:27:32.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.586 "hdgst": ${hdgst:-false}, 00:27:32.586 "ddgst": ${ddgst:-false} 00:27:32.586 }, 00:27:32.586 "method": "bdev_nvme_attach_controller" 00:27:32.586 } 00:27:32.586 EOF 00:27:32.586 )") 00:27:32.586 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:32.586 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.586 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.586 { 00:27:32.586 "params": { 00:27:32.586 "name": "Nvme$subsystem", 00:27:32.586 "trtype": "$TEST_TRANSPORT", 00:27:32.586 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:32.586 "adrfam": "ipv4", 00:27:32.586 "trsvcid": "$NVMF_PORT", 00:27:32.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.586 "hdgst": ${hdgst:-false}, 00:27:32.586 "ddgst": ${ddgst:-false} 00:27:32.586 }, 00:27:32.586 "method": "bdev_nvme_attach_controller" 00:27:32.586 } 00:27:32.586 EOF 00:27:32.586 )") 00:27:32.586 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:32.586 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.586 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.586 { 00:27:32.586 "params": { 00:27:32.586 "name": "Nvme$subsystem", 00:27:32.586 "trtype": "$TEST_TRANSPORT", 00:27:32.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.586 "adrfam": "ipv4", 00:27:32.586 "trsvcid": "$NVMF_PORT", 00:27:32.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.586 "hdgst": ${hdgst:-false}, 00:27:32.587 "ddgst": ${ddgst:-false} 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 } 00:27:32.587 EOF 00:27:32.587 )") 00:27:32.587 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:32.587 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.587 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.587 { 00:27:32.587 "params": { 00:27:32.587 "name": "Nvme$subsystem", 00:27:32.587 "trtype": "$TEST_TRANSPORT", 00:27:32.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.587 "adrfam": "ipv4", 00:27:32.587 "trsvcid": "$NVMF_PORT", 00:27:32.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.587 
"hdgst": ${hdgst:-false}, 00:27:32.587 "ddgst": ${ddgst:-false} 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 } 00:27:32.587 EOF 00:27:32.587 )") 00:27:32.587 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:32.587 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.587 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.587 { 00:27:32.587 "params": { 00:27:32.587 "name": "Nvme$subsystem", 00:27:32.587 "trtype": "$TEST_TRANSPORT", 00:27:32.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.587 "adrfam": "ipv4", 00:27:32.587 "trsvcid": "$NVMF_PORT", 00:27:32.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.587 "hdgst": ${hdgst:-false}, 00:27:32.587 "ddgst": ${ddgst:-false} 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 } 00:27:32.587 EOF 00:27:32.587 )") 00:27:32.587 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:32.587 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:32.587 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:32.587 10:48:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:32.587 "params": { 00:27:32.587 "name": "Nvme1", 00:27:32.587 "trtype": "tcp", 00:27:32.587 "traddr": "10.0.0.2", 00:27:32.587 "adrfam": "ipv4", 00:27:32.587 "trsvcid": "4420", 00:27:32.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:32.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:32.587 "hdgst": false, 00:27:32.587 "ddgst": false 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 },{ 00:27:32.587 "params": { 00:27:32.587 "name": "Nvme2", 00:27:32.587 "trtype": "tcp", 00:27:32.587 "traddr": "10.0.0.2", 00:27:32.587 "adrfam": "ipv4", 00:27:32.587 "trsvcid": "4420", 00:27:32.587 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:32.587 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:32.587 "hdgst": false, 00:27:32.587 "ddgst": false 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 },{ 00:27:32.587 "params": { 00:27:32.587 "name": "Nvme3", 00:27:32.587 "trtype": "tcp", 00:27:32.587 "traddr": "10.0.0.2", 00:27:32.587 "adrfam": "ipv4", 00:27:32.587 "trsvcid": "4420", 00:27:32.587 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:32.587 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:32.587 "hdgst": false, 00:27:32.587 "ddgst": false 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 },{ 00:27:32.587 "params": { 00:27:32.587 "name": "Nvme4", 00:27:32.587 "trtype": "tcp", 00:27:32.587 "traddr": "10.0.0.2", 00:27:32.587 "adrfam": "ipv4", 00:27:32.587 "trsvcid": "4420", 00:27:32.587 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:32.587 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:32.587 "hdgst": false, 00:27:32.587 "ddgst": false 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 },{ 00:27:32.587 "params": { 00:27:32.587 "name": "Nvme5", 00:27:32.587 
"trtype": "tcp", 00:27:32.587 "traddr": "10.0.0.2", 00:27:32.587 "adrfam": "ipv4", 00:27:32.587 "trsvcid": "4420", 00:27:32.587 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:32.587 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:32.587 "hdgst": false, 00:27:32.587 "ddgst": false 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 },{ 00:27:32.587 "params": { 00:27:32.587 "name": "Nvme6", 00:27:32.587 "trtype": "tcp", 00:27:32.587 "traddr": "10.0.0.2", 00:27:32.587 "adrfam": "ipv4", 00:27:32.587 "trsvcid": "4420", 00:27:32.587 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:32.587 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:32.587 "hdgst": false, 00:27:32.587 "ddgst": false 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 },{ 00:27:32.587 "params": { 00:27:32.587 "name": "Nvme7", 00:27:32.587 "trtype": "tcp", 00:27:32.587 "traddr": "10.0.0.2", 00:27:32.587 "adrfam": "ipv4", 00:27:32.587 "trsvcid": "4420", 00:27:32.587 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:32.587 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:32.587 "hdgst": false, 00:27:32.587 "ddgst": false 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 },{ 00:27:32.587 "params": { 00:27:32.587 "name": "Nvme8", 00:27:32.587 "trtype": "tcp", 00:27:32.587 "traddr": "10.0.0.2", 00:27:32.587 "adrfam": "ipv4", 00:27:32.587 "trsvcid": "4420", 00:27:32.587 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:32.587 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:32.587 "hdgst": false, 00:27:32.587 "ddgst": false 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 },{ 00:27:32.587 "params": { 00:27:32.587 "name": "Nvme9", 00:27:32.587 "trtype": "tcp", 00:27:32.587 "traddr": "10.0.0.2", 00:27:32.587 "adrfam": "ipv4", 00:27:32.587 "trsvcid": "4420", 00:27:32.587 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:32.587 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:32.587 "hdgst": false, 00:27:32.587 "ddgst": 
false 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 },{ 00:27:32.587 "params": { 00:27:32.587 "name": "Nvme10", 00:27:32.587 "trtype": "tcp", 00:27:32.587 "traddr": "10.0.0.2", 00:27:32.587 "adrfam": "ipv4", 00:27:32.587 "trsvcid": "4420", 00:27:32.587 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:32.587 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:32.587 "hdgst": false, 00:27:32.587 "ddgst": false 00:27:32.587 }, 00:27:32.587 "method": "bdev_nvme_attach_controller" 00:27:32.587 }' 00:27:32.587 [2024-07-23 10:48:21.031298] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:32.587 [2024-07-23 10:48:21.031380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893595 ] 00:27:32.587 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.845 [2024-07-23 10:48:21.092331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.845 [2024-07-23 10:48:21.168104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.222 Running I/O for 10 seconds... 
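The ten bdev_nvme_attach_controller calls above differ only in their index (Nvme5…Nvme10, cnode5…cnode10). A sketch of generating such a params batch with jq; the gen_attach_params helper is hypothetical, not part of the SPDK scripts, which build this batch inline:

```shell
# Hypothetical helper reproducing the params batch above: one
# bdev_nvme_attach_controller entry per controller, differing only in the
# Nvme<i>/cnode<i>/host<i> index. jq assembles and then slurps the JSON.
gen_attach_params() {
  local n=$1 i
  for i in $(seq 1 "$n"); do
    jq -n --arg name "Nvme$i" \
          --arg subnqn "nqn.2016-06.io.spdk:cnode$i" \
          --arg hostnqn "nqn.2016-06.io.spdk:host$i" \
      '{ params: { name: $name, trtype: "tcp", traddr: "10.0.0.2",
                   adrfam: "ipv4", trsvcid: "4420", subnqn: $subnqn,
                   hostnqn: $hostnqn, hdgst: false, ddgst: false },
         method: "bdev_nvme_attach_controller" }'
  done | jq -s .   # collect the stream of objects into one array
}
```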
00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=74 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 74 -ge 100 ']' 00:27:34.788 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=144 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 144 -ge 100 ']' 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3893595 00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@946 -- # '[' -z 3893595 ']'
00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3893595
00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname
00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3893595
00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:27:35.046 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3893595'
killing process with pid 3893595
10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3893595
10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3893595
00:27:35.305 Received shutdown signal, test time was about 1.010244 seconds
00:27:35.305
00:27:35.305 Latency(us)
00:27:35.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:35.305 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:35.305 Verification LBA range: start 0x0 length 0x400
00:27:35.305 Nvme1n1 : 1.01 253.63 15.85 0.00 0.00 247326.53 20000.62 276513.37
00:27:35.305 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:35.305 Verification LBA range: start 0x0 length 0x400
00:27:35.305 Nvme2n1 : 0.98 196.44 12.28 0.00 0.00 314871.91 23204.60 299815.06
00:27:35.305 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:35.305 Verification LBA range: start 0x0 length 0x400
00:27:35.305 Nvme3n1 : 0.96 199.07 12.44 0.00 0.00 304653.91 25049.32 326223.64
00:27:35.305 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:35.305 Verification LBA range: start 0x0 length 0x400
00:27:35.305 Nvme4n1 : 1.00 256.51 16.03 0.00 0.00 230745.88 29321.29 256318.58
00:27:35.305 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:35.305 Verification LBA range: start 0x0 length 0x400
00:27:35.305 Nvme5n1 : 0.98 200.45 12.53 0.00 0.00 287926.30 8398.32 302921.96
00:27:35.305 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:35.305 Verification LBA range: start 0x0 length 0x400
00:27:35.305 Nvme6n1 : 0.99 193.01 12.06 0.00 0.00 295667.99 25437.68 315349.52
00:27:35.305 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:35.305 Verification LBA range: start 0x0 length 0x400
00:27:35.305 Nvme7n1 : 0.95 203.05 12.69 0.00 0.00 272973.62 18835.53 318456.41
00:27:35.305 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:35.305 Verification LBA range: start 0x0 length 0x400
00:27:35.305 Nvme8n1 : 0.99 194.51 12.16 0.00 0.00 280354.01 18350.08 287387.50
00:27:35.305 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:35.305 Verification LBA range: start 0x0 length 0x400
00:27:35.305 Nvme9n1 : 1.00 191.70 11.98 0.00 0.00 278821.80 47185.92 293601.28
00:27:35.305 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:35.305 Verification LBA range: start 0x0 length 0x400
00:27:35.305 Nvme10n1 : 1.01 190.89 11.93 0.00 0.00 274233.27 27185.30 313796.08
00:27:35.305 ===================================================================================================================
00:27:35.305 Total : 2079.26 129.95 0.00 0.00 276303.32 8398.32 326223.64
00:27:35.565 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:27:36.501 10:48:24
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3893536 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.501 rmmod nvme_tcp 00:27:36.501 rmmod nvme_fabrics 00:27:36.501 rmmod nvme_keyring 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3893536 ']' 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@490 -- # killprocess 3893536 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3893536 ']' 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3893536 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3893536 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3893536' 00:27:36.501 killing process with pid 3893536 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3893536 00:27:36.501 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3893536 00:27:36.761 10:48:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:36.761 10:48:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:36.761 10:48:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:36.761 10:48:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.761 10:48:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.761 10:48:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:27:36.761 10:48:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.761 10:48:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:39.305 00:27:39.305 real 0m7.276s 00:27:39.305 user 0m22.035s 00:27:39.305 sys 0m1.384s 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.305 ************************************ 00:27:39.305 END TEST nvmf_shutdown_tc2 00:27:39.305 ************************************ 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:39.305 ************************************ 00:27:39.305 START TEST nvmf_shutdown_tc3 00:27:39.305 ************************************ 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.305 10:48:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # 
net_devs=() 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:39.305 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:39.305 
Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:27:39.305 Found net devices under 0000:08:00.0: cvl_0_0 00:27:39.305 10:48:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.305 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:39.306 Found net devices under 0000:08:00.1: cvl_0_1 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 
-- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:39.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:27:39.306 00:27:39.306 --- 10.0.0.2 ping statistics --- 00:27:39.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.306 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:39.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:27:39.306 00:27:39.306 --- 10.0.0.1 ping statistics --- 00:27:39.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.306 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 
-- # '[' tcp == tcp ']' 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3894312 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3894312 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3894312 ']' 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
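The ping exchange earlier in this run works because nvmf_tcp_init moved one cvl port into a target namespace (10.0.0.2) and left the other in the root namespace as the initiator (10.0.0.1). A dry-run sketch of that plumbing; run() and setup_tcp_ns are illustrative wrappers, not SPDK functions, and the real commands need root plus the E810 NIC:

```shell
# Dry-run sketch of the nvmf_tcp_init plumbing traced above: flush both cvl
# ports, move one into a target namespace, assign the initiator and target
# addresses, bring the links up, open TCP/4420, and ping across.
# Set DRY_RUN=1 to print the commands instead of executing them.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_ns() {
  run ip -4 addr flush cvl_0_0
  run ip -4 addr flush cvl_0_1
  run ip netns add cvl_0_0_ns_spdk
  run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  run ip addr add 10.0.0.1/24 dev cvl_0_1
  run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  run ip netns exec cvl_0_0_ns_spdk ip link set lo up
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2
}
```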
00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:39.306 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:39.306 [2024-07-23 10:48:27.551548] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:39.306 [2024-07-23 10:48:27.551640] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.306 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.306 [2024-07-23 10:48:27.620631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:39.306 [2024-07-23 10:48:27.710471] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.306 [2024-07-23 10:48:27.710542] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.306 [2024-07-23 10:48:27.710559] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.306 [2024-07-23 10:48:27.710572] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.306 [2024-07-23 10:48:27.710583] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
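waitforlisten then gates the test until the freshly started nvmf_tgt answers on its RPC socket. A hedged sketch of that pattern; the default socket path and the retry budget are assumptions, and SPDK's real helper also issues an RPC to confirm the server answers, which is omitted here:

```shell
# Sketch of the waitforlisten gate: after nvmfpid is recorded, poll until
# the target process is both alive and has created its RPC socket.
waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
    [ -S "$sock" ] && return 0               # RPC socket exists
    sleep 0.1
  done
  return 1                                   # timed out waiting
}
```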
00:27:39.306 [2024-07-23 10:48:27.710673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.306 [2024-07-23 10:48:27.710726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.306 [2024-07-23 10:48:27.714506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:39.306 [2024-07-23 10:48:27.714555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.566 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:39.566 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:39.566 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:39.566 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.566 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:39.567 [2024-07-23 10:48:27.858131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:39.567 
10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:39.567 10:48:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.567 10:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:39.567 Malloc1 00:27:39.567 [2024-07-23 10:48:27.948724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.567 Malloc2 00:27:39.567 Malloc3 00:27:39.567 Malloc4 00:27:39.861 Malloc5 00:27:39.861 Malloc6 00:27:39.861 Malloc7 00:27:39.861 Malloc8 00:27:39.861 Malloc9 00:27:39.861 Malloc10 00:27:39.861 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.861 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:39.861 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.861 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3894463 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 
3894463 /var/tmp/bdevperf.sock 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3894463 ']' 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:40.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.120 { 00:27:40.120 "params": { 00:27:40.120 "name": "Nvme$subsystem", 00:27:40.120 "trtype": "$TEST_TRANSPORT", 00:27:40.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.120 "adrfam": "ipv4", 00:27:40.120 "trsvcid": "$NVMF_PORT", 00:27:40.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.120 "hdgst": ${hdgst:-false}, 00:27:40.120 "ddgst": ${ddgst:-false} 00:27:40.120 }, 00:27:40.120 "method": "bdev_nvme_attach_controller" 00:27:40.120 } 00:27:40.120 EOF 00:27:40.120 )") 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.120 { 00:27:40.120 "params": { 00:27:40.120 "name": "Nvme$subsystem", 00:27:40.120 "trtype": "$TEST_TRANSPORT", 00:27:40.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.120 "adrfam": "ipv4", 00:27:40.120 "trsvcid": "$NVMF_PORT", 00:27:40.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.120 "hdgst": ${hdgst:-false}, 00:27:40.120 "ddgst": ${ddgst:-false} 00:27:40.120 
}, 00:27:40.120 "method": "bdev_nvme_attach_controller" 00:27:40.120 } 00:27:40.120 EOF 00:27:40.120 )") 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.120 { 00:27:40.120 "params": { 00:27:40.120 "name": "Nvme$subsystem", 00:27:40.120 "trtype": "$TEST_TRANSPORT", 00:27:40.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.120 "adrfam": "ipv4", 00:27:40.120 "trsvcid": "$NVMF_PORT", 00:27:40.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.120 "hdgst": ${hdgst:-false}, 00:27:40.120 "ddgst": ${ddgst:-false} 00:27:40.120 }, 00:27:40.120 "method": "bdev_nvme_attach_controller" 00:27:40.120 } 00:27:40.120 EOF 00:27:40.120 )") 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.120 { 00:27:40.120 "params": { 00:27:40.120 "name": "Nvme$subsystem", 00:27:40.120 "trtype": "$TEST_TRANSPORT", 00:27:40.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.120 "adrfam": "ipv4", 00:27:40.120 "trsvcid": "$NVMF_PORT", 00:27:40.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.120 "hdgst": ${hdgst:-false}, 00:27:40.120 "ddgst": ${ddgst:-false} 00:27:40.120 }, 00:27:40.120 "method": "bdev_nvme_attach_controller" 00:27:40.120 } 00:27:40.120 EOF 00:27:40.120 )") 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:40.120 10:48:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.120 { 00:27:40.120 "params": { 00:27:40.120 "name": "Nvme$subsystem", 00:27:40.120 "trtype": "$TEST_TRANSPORT", 00:27:40.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.120 "adrfam": "ipv4", 00:27:40.120 "trsvcid": "$NVMF_PORT", 00:27:40.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.120 "hdgst": ${hdgst:-false}, 00:27:40.120 "ddgst": ${ddgst:-false} 00:27:40.120 }, 00:27:40.120 "method": "bdev_nvme_attach_controller" 00:27:40.120 } 00:27:40.120 EOF 00:27:40.120 )") 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.120 { 00:27:40.120 "params": { 00:27:40.120 "name": "Nvme$subsystem", 00:27:40.120 "trtype": "$TEST_TRANSPORT", 00:27:40.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.120 "adrfam": "ipv4", 00:27:40.120 "trsvcid": "$NVMF_PORT", 00:27:40.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.120 "hdgst": ${hdgst:-false}, 00:27:40.120 "ddgst": ${ddgst:-false} 00:27:40.120 }, 00:27:40.120 "method": "bdev_nvme_attach_controller" 00:27:40.120 } 00:27:40.120 EOF 00:27:40.120 )") 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.120 { 00:27:40.120 
"params": { 00:27:40.120 "name": "Nvme$subsystem", 00:27:40.120 "trtype": "$TEST_TRANSPORT", 00:27:40.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.120 "adrfam": "ipv4", 00:27:40.120 "trsvcid": "$NVMF_PORT", 00:27:40.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.120 "hdgst": ${hdgst:-false}, 00:27:40.120 "ddgst": ${ddgst:-false} 00:27:40.120 }, 00:27:40.120 "method": "bdev_nvme_attach_controller" 00:27:40.120 } 00:27:40.120 EOF 00:27:40.120 )") 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.120 { 00:27:40.120 "params": { 00:27:40.120 "name": "Nvme$subsystem", 00:27:40.120 "trtype": "$TEST_TRANSPORT", 00:27:40.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.120 "adrfam": "ipv4", 00:27:40.120 "trsvcid": "$NVMF_PORT", 00:27:40.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.120 "hdgst": ${hdgst:-false}, 00:27:40.120 "ddgst": ${ddgst:-false} 00:27:40.120 }, 00:27:40.120 "method": "bdev_nvme_attach_controller" 00:27:40.120 } 00:27:40.120 EOF 00:27:40.120 )") 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.120 { 00:27:40.120 "params": { 00:27:40.120 "name": "Nvme$subsystem", 00:27:40.120 "trtype": "$TEST_TRANSPORT", 00:27:40.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.120 "adrfam": "ipv4", 00:27:40.120 "trsvcid": "$NVMF_PORT", 00:27:40.120 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.120 "hdgst": ${hdgst:-false}, 00:27:40.120 "ddgst": ${ddgst:-false} 00:27:40.120 }, 00:27:40.120 "method": "bdev_nvme_attach_controller" 00:27:40.120 } 00:27:40.120 EOF 00:27:40.120 )") 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:40.120 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.121 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.121 { 00:27:40.121 "params": { 00:27:40.121 "name": "Nvme$subsystem", 00:27:40.121 "trtype": "$TEST_TRANSPORT", 00:27:40.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.121 "adrfam": "ipv4", 00:27:40.121 "trsvcid": "$NVMF_PORT", 00:27:40.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.121 "hdgst": ${hdgst:-false}, 00:27:40.121 "ddgst": ${ddgst:-false} 00:27:40.121 }, 00:27:40.121 "method": "bdev_nvme_attach_controller" 00:27:40.121 } 00:27:40.121 EOF 00:27:40.121 )") 00:27:40.121 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:40.121 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:40.121 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:40.121 10:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:40.121 "params": { 00:27:40.121 "name": "Nvme1", 00:27:40.121 "trtype": "tcp", 00:27:40.121 "traddr": "10.0.0.2", 00:27:40.121 "adrfam": "ipv4", 00:27:40.121 "trsvcid": "4420", 00:27:40.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:40.121 "hdgst": false, 00:27:40.121 "ddgst": false 00:27:40.121 }, 00:27:40.121 "method": "bdev_nvme_attach_controller" 00:27:40.121 },{ 00:27:40.121 "params": { 00:27:40.121 "name": "Nvme2", 00:27:40.121 "trtype": "tcp", 00:27:40.121 "traddr": "10.0.0.2", 00:27:40.121 "adrfam": "ipv4", 00:27:40.121 "trsvcid": "4420", 00:27:40.121 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:40.121 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:40.121 "hdgst": false, 00:27:40.121 "ddgst": false 00:27:40.121 }, 00:27:40.121 "method": "bdev_nvme_attach_controller" 00:27:40.121 },{ 00:27:40.121 "params": { 00:27:40.121 "name": "Nvme3", 00:27:40.121 "trtype": "tcp", 00:27:40.121 "traddr": "10.0.0.2", 00:27:40.121 "adrfam": "ipv4", 00:27:40.121 "trsvcid": "4420", 00:27:40.121 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:40.121 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:40.121 "hdgst": false, 00:27:40.121 "ddgst": false 00:27:40.121 }, 00:27:40.121 "method": "bdev_nvme_attach_controller" 00:27:40.121 },{ 00:27:40.121 "params": { 00:27:40.121 "name": "Nvme4", 00:27:40.121 "trtype": "tcp", 00:27:40.121 "traddr": "10.0.0.2", 00:27:40.121 "adrfam": "ipv4", 00:27:40.121 "trsvcid": "4420", 00:27:40.121 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:40.121 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:40.121 "hdgst": false, 00:27:40.121 "ddgst": false 00:27:40.121 }, 00:27:40.121 "method": "bdev_nvme_attach_controller" 00:27:40.121 },{ 00:27:40.121 "params": { 00:27:40.121 "name": "Nvme5", 00:27:40.121 
"trtype": "tcp", 00:27:40.121 "traddr": "10.0.0.2", 00:27:40.121 "adrfam": "ipv4", 00:27:40.121 "trsvcid": "4420", 00:27:40.121 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:40.121 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:40.121 "hdgst": false, 00:27:40.121 "ddgst": false 00:27:40.121 }, 00:27:40.121 "method": "bdev_nvme_attach_controller" 00:27:40.121 },{ 00:27:40.121 "params": { 00:27:40.121 "name": "Nvme6", 00:27:40.121 "trtype": "tcp", 00:27:40.121 "traddr": "10.0.0.2", 00:27:40.121 "adrfam": "ipv4", 00:27:40.121 "trsvcid": "4420", 00:27:40.121 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:40.121 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:40.121 "hdgst": false, 00:27:40.121 "ddgst": false 00:27:40.121 }, 00:27:40.121 "method": "bdev_nvme_attach_controller" 00:27:40.121 },{ 00:27:40.121 "params": { 00:27:40.121 "name": "Nvme7", 00:27:40.121 "trtype": "tcp", 00:27:40.121 "traddr": "10.0.0.2", 00:27:40.121 "adrfam": "ipv4", 00:27:40.121 "trsvcid": "4420", 00:27:40.121 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:40.121 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:40.121 "hdgst": false, 00:27:40.121 "ddgst": false 00:27:40.121 }, 00:27:40.121 "method": "bdev_nvme_attach_controller" 00:27:40.121 },{ 00:27:40.121 "params": { 00:27:40.121 "name": "Nvme8", 00:27:40.121 "trtype": "tcp", 00:27:40.121 "traddr": "10.0.0.2", 00:27:40.121 "adrfam": "ipv4", 00:27:40.121 "trsvcid": "4420", 00:27:40.121 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:40.121 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:40.121 "hdgst": false, 00:27:40.121 "ddgst": false 00:27:40.121 }, 00:27:40.121 "method": "bdev_nvme_attach_controller" 00:27:40.121 },{ 00:27:40.121 "params": { 00:27:40.121 "name": "Nvme9", 00:27:40.121 "trtype": "tcp", 00:27:40.121 "traddr": "10.0.0.2", 00:27:40.121 "adrfam": "ipv4", 00:27:40.121 "trsvcid": "4420", 00:27:40.121 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:40.121 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:40.121 "hdgst": false, 00:27:40.121 "ddgst": 
false 00:27:40.121 }, 00:27:40.121 "method": "bdev_nvme_attach_controller" 00:27:40.121 },{ 00:27:40.121 "params": { 00:27:40.121 "name": "Nvme10", 00:27:40.121 "trtype": "tcp", 00:27:40.121 "traddr": "10.0.0.2", 00:27:40.121 "adrfam": "ipv4", 00:27:40.121 "trsvcid": "4420", 00:27:40.121 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:40.121 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:40.121 "hdgst": false, 00:27:40.121 "ddgst": false 00:27:40.121 }, 00:27:40.121 "method": "bdev_nvme_attach_controller" 00:27:40.121 }' 00:27:40.121 [2024-07-23 10:48:28.426596] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:40.121 [2024-07-23 10:48:28.426682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894463 ] 00:27:40.121 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.121 [2024-07-23 10:48:28.488881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.121 [2024-07-23 10:48:28.576472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.025 Running I/O for 10 seconds... 
00:27:42.025 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:42.025 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:42.025 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:42.025 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.025 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:42.284 
10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:42.284 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:42.542 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:42.542 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:42.542 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:42.542 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:42.542 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.542 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:42.542 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.542 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=89 00:27:42.542 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 89 -ge 100 ']' 00:27:42.542 10:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( 
i != 0 )) 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3894312 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3894312 ']' 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3894312 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3894312 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 
00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3894312' 00:27:42.815 killing process with pid 3894312 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3894312 00:27:42.815 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3894312 00:27:42.815 [2024-07-23 10:48:31.193046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193272] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193472] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.815 [2024-07-23 10:48:31.193635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.815 [2024-07-23 10:48:31.193653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.816 [2024-07-23 10:48:31.193669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.816 [2024-07-23 10:48:31.193686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.193635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4d50 is same with the state(5) to be set
00:27:42.816 [2024-07-23 10:48:31.193706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.193724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.193740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.193767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.193789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.193809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.193825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.193843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.193858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.193876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.193893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.193912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.193928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.193945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.193961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.193978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.193999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.194019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.194035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.194053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.194068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.194085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.194101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.194120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.194137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.194155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.194171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.194188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.194218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.194238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.194254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.194271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.194287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.816 [2024-07-23 10:48:31.194305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.816 [2024-07-23 10:48:31.194323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.817 [2024-07-23 10:48:31.194350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.817 [2024-07-23 10:48:31.194367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.817 [2024-07-23 10:48:31.194387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.817 [2024-07-23 10:48:31.194403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.817 [2024-07-23 10:48:31.194421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.817 [2024-07-23 10:48:31.194437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.817 [2024-07-23 10:48:31.194454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.817 [2024-07-23 10:48:31.194470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.817 [2024-07-23 10:48:31.194499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.817 [2024-07-23 10:48:31.194517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.817 [2024-07-23 10:48:31.194536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.817 [2024-07-23 10:48:31.194552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.817 [2024-07-23 10:48:31.194570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.817 [2024-07-23 10:48:31.194595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.817 [2024-07-23 10:48:31.194614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.817 [2024-07-23 10:48:31.194630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.817 [2024-07-23 10:48:31.194648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.817 [2024-07-23 10:48:31.194664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.817 [2024-07-23 10:48:31.194682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:27:42.817 [2024-07-23 10:48:31.194697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.194714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.194730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.194747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.194763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.194781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.194796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.194814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.194829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.194846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.194861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.194878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.194893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.194916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.194932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.194949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.194963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.194981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.194996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.195014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.195029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.195046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.195070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.195088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.195104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.195121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.195136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.195153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.195168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.195185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.195201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.195218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.195233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.195250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.195265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.195282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.195298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.195314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.195329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.817 [2024-07-23 10:48:31.195346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.817 [2024-07-23 10:48:31.195361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.818 [2024-07-23 10:48:31.195378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.818 [2024-07-23 10:48:31.195393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.818 [2024-07-23 10:48:31.195475] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23b39f0 was disconnected and freed. reset controller. 
00:27:42.818 [2024-07-23 10:48:31.196267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.818 [2024-07-23 10:48:31.196299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.818 [2024-07-23 10:48:31.196318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.818 [2024-07-23 10:48:31.196338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.818 [2024-07-23 10:48:31.196355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.818 [2024-07-23 10:48:31.196369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.818 [2024-07-23 10:48:31.196385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.818 [2024-07-23 10:48:31.196400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.818 [2024-07-23 10:48:31.196415] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2809210 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.196548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.818 [2024-07-23 10:48:31.196571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.818 [2024-07-23 10:48:31.196588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:42.818 [2024-07-23 10:48:31.196607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.818 [2024-07-23 10:48:31.196622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:42.818 [2024-07-23 10:48:31.196637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.818 [2024-07-23 10:48:31.196659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:42.818 [2024-07-23 10:48:31.196673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.818 [2024-07-23 10:48:31.196671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set
00:27:42.818 [2024-07-23 10:48:31.196688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b0980 is same with the state(5) to be set
00:27:42.818 [2024-07-23 10:48:31.197260]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197365] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197418] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.818 [2024-07-23 10:48:31.197522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.197535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.197548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c27f0 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.198571] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.819 [2024-07-23 10:48:31.198634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b0980 (9): Bad file descriptor 00:27:42.819 [2024-07-23 10:48:31.200843] posix.c:1037:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:27:42.819 [2024-07-23 10:48:31.200884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b0980 with addr=10.0.0.2, port=4420 00:27:42.819 [2024-07-23 10:48:31.200903] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b0980 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.201469] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:42.819 [2024-07-23 10:48:31.201516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b0980 (9): Bad file descriptor 00:27:42.819 [2024-07-23 10:48:31.201877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.201914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.201930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.201944] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.201957] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.201973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.201987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202014] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202171] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202184] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202255] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202366] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202394] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202476] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202543] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202606] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202619] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202682] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.819 [2024-07-23 10:48:31.202711] 
nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.819 [2024-07-23 10:48:31.202711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202729] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.819 [2024-07-23 10:48:31.202736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.819 [2024-07-23 10:48:31.202764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.202778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.202791] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.202798] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:42.820 [2024-07-23 10:48:31.202805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.202819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2c90 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.203354] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:42.820 [2024-07-23 10:48:31.205427] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:42.820 [2024-07-23 10:48:31.206606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2809210 (9): Bad file descriptor 00:27:42.820 [2024-07-23 10:48:31.206723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.820 [2024-07-23 10:48:31.206758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.820 [2024-07-23 10:48:31.206790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.820 [2024-07-23 10:48:31.206818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.820 [2024-07-23 10:48:31.206837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.820 [2024-07-23 10:48:31.206852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.820 [2024-07-23 10:48:31.206867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.820 [2024-07-23 10:48:31.206881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.820 [2024-07-23 10:48:31.206907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e8fa0 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.206971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:27:42.820 [2024-07-23 10:48:31.206993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.820 [2024-07-23 10:48:31.207009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.820 [2024-07-23 10:48:31.207024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.820 [2024-07-23 10:48:31.207039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.820 [2024-07-23 10:48:31.207054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.820 [2024-07-23 10:48:31.207069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.820 [2024-07-23 10:48:31.207083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.820 [2024-07-23 10:48:31.207098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27f0950 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207397] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207435] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207605] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with 
the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207783] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207891] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207964] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 
00:27:42.820 [2024-07-23 10:48:31.207980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.207994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.208008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.208021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.208035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.208087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.208116] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.208140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.208159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.208173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.208191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 10:48:31.208217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 00:27:42.820 [2024-07-23 
10:48:31.208231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3150 is same with the state(5) to be set 
00:27:42.821 [2024-07-23 10:48:31.210160] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:27:42.821 [2024-07-23 10:48:31.210187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c35f0 is same with the state(5) to be set 
00:27:42.821 [2024-07-23 10:48:31.210734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:42.821 [2024-07-23 10:48:31.210769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b0980 with addr=10.0.0.2, port=4420 
00:27:42.821 [2024-07-23 10:48:31.210789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b0980 is same with the state(5) to be set 
00:27:42.821 [2024-07-23 10:48:31.211149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b0980 (9): Bad file descriptor 
00:27:42.821 [2024-07-23 10:48:31.211443] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:27:42.821 [2024-07-23 10:48:31.211465] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:27:42.821 [2024-07-23 10:48:31.211495] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:42.821 [2024-07-23 10:48:31.211581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.821 [2024-07-23 10:48:31.211606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:42.821 [2024-07-23 10:48:31.211637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.821 [2024-07-23 10:48:31.211654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:42.821 [2024-07-23 10:48:31.211674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.821 [2024-07-23 10:48:31.211699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:42.821 [2024-07-23 10:48:31.211718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.821 [2024-07-23 10:48:31.211735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:42.821 [2024-07-23 10:48:31.211754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.821 [2024-07-23 10:48:31.211770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:42.822 [2024-07-23 10:48:31.211788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.822 [2024-07-23 10:48:31.211805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:42.822 [2024-07-23 10:48:31.211824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.822 [2024-07-23 10:48:31.211840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:42.822 [2024-07-23 10:48:31.211858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.211873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.211890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.211906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.211923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.211939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.211957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.211972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.211994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212642] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with [2024-07-23 10:48:31.212886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:42.822 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212908] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.822 [2024-07-23 10:48:31.212922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-23 10:48:31.212924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 the state(5) to be set 00:27:42.822 [2024-07-23 10:48:31.212939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.822 [2024-07-23 10:48:31.212942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.822 [2024-07-23 10:48:31.212958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.212968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.822 [2024-07-23 10:48:31.212976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.822 [2024-07-23 10:48:31.212985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.822 [2024-07-23 10:48:31.212992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.822 [2024-07-23 10:48:31.213000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.822 [2024-07-23 10:48:31.213010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with [2024-07-23 10:48:31.213044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:12the state(5) to be set 00:27:42.823 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 
[2024-07-23 10:48:31.213059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with [2024-07-23 10:48:31.213061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:42.823 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213103] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:12[2024-07-23 10:48:31.213119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with [2024-07-23 10:48:31.213137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:42.823 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213153] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the 
state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:12[2024-07-23 10:48:31.213268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with [2024-07-23 10:48:31.213283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:42.823 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213346] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213388] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 
[2024-07-23 10:48:31.213442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213547] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with 
the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213602] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213630] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.823 [2024-07-23 10:48:31.213660] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.823 [2024-07-23 10:48:31.213673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.823 [2024-07-23 10:48:31.213680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.824 [2024-07-23 10:48:31.213687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.213697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.824 [2024-07-23 10:48:31.213708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.213714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27c3a10 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.213722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.213736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.213749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.213766] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.213779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.213779] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27c3a10 was disconnected and freed. reset controller. 00:27:42.824 [2024-07-23 10:48:31.213793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.213806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.213819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ab0 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.214069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:42.824 [2024-07-23 10:48:31.215458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c43f0 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c43f0 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215905] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 
10:48:31.215946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.215999] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216113] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216154] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216208] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216261] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216274] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216315] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216328] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216440] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.824 [2024-07-23 10:48:31.216569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.825 [2024-07-23 10:48:31.216590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.825 [2024-07-23 10:48:31.216603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.825 [2024-07-23 10:48:31.216617] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.825 [2024-07-23 10:48:31.216631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.825 [2024-07-23 10:48:31.217487] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:42.825 [2024-07-23 10:48:31.217568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27effe0 (9): Bad file descriptor 00:27:42.825 [2024-07-23 10:48:31.217648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.217671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.217688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.217704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.217719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.217734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.217750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.217765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 
[2024-07-23 10:48:31.217780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x281e9f0 is same with the state(5) to be set 00:27:42.825 [2024-07-23 10:48:31.217830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.217861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.217889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.217906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.217923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.217938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.217954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.217969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.217983] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2829150 is same with the state(5) to be set 00:27:42.825 [2024-07-23 10:48:31.218015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e8fa0 (9): Bad file descriptor 00:27:42.825 [2024-07-23 10:48:31.218048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x27f0950 (9): Bad file descriptor 00:27:42.825 [2024-07-23 10:48:31.218099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.218119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.218152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.218183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.218213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b8060 is same with the state(5) to be set 00:27:42.825 [2024-07-23 10:48:31.218274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.218294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218311] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.218326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.218356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.825 [2024-07-23 10:48:31.218391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e2610 is same with the state(5) to be set 00:27:42.825 [2024-07-23 10:48:31.218712] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2976c30 was disconnected and freed. reset controller. 
00:27:42.825 [2024-07-23 10:48:31.218797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.218820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.218861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.218895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.218929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.218963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.218979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2978130 is same with the state(5) to be set 00:27:42.825 [2024-07-23 10:48:31.219037] 
bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2978130 was disconnected and freed. reset controller. 00:27:42.825 [2024-07-23 10:48:31.220920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.220946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.220969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.220986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.221004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.221020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.221038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.221053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.221077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.221094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.221116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.221133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.221151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.221166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.221183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.221199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.221216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.221231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.221249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.221264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.825 [2024-07-23 10:48:31.221281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.221296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:42.825 [2024-07-23 10:48:31.221314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.825 [2024-07-23 10:48:31.221329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221498] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.221971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.221987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 
10:48:31.222087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222276] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.222593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.222614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.241900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.826 [2024-07-23 10:48:31.241932] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.826 [2024-07-23 10:48:31.241947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4890 is same with the state(5) to be set 00:27:42.826 [2024-07-23 10:48:31.250586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.826 [2024-07-23 10:48:31.250705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.826 [2024-07-23 10:48:31.250725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.250745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.250762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.250780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.250796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.250817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.250833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.250851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.250888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.250907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.250923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.250941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.250957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.250974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.250990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.251010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.251026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.251044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.251060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.251077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.251093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.251110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.251126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.251144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.251159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.251177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.251192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.251210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.251225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.251242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28f10b0 is same with the state(5) to be set 00:27:42.827 [2024-07-23 10:48:31.253115] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:42.827 [2024-07-23 10:48:31.253194] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:42.827 [2024-07-23 10:48:31.253250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e2610 (9): Bad file descriptor 00:27:42.827 [2024-07-23 10:48:31.253584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.827 [2024-07-23 
10:48:31.253634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27effe0 with addr=10.0.0.2, port=4420 00:27:42.827 [2024-07-23 10:48:31.253672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27effe0 is same with the state(5) to be set 00:27:42.827 [2024-07-23 10:48:31.253798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.827 [2024-07-23 10:48:31.253831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.253859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.827 [2024-07-23 10:48:31.253874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.253891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.827 [2024-07-23 10:48:31.253906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.253922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.827 [2024-07-23 10:48:31.253937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.253951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2808b90 is same with the state(5) to be set 00:27:42.827 [2024-07-23 10:48:31.253989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x281e9f0 (9): Bad file descriptor 00:27:42.827 [2024-07-23 10:48:31.254020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2829150 (9): Bad file descriptor 00:27:42.827 [2024-07-23 10:48:31.254067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b8060 (9): Bad file descriptor 00:27:42.827 [2024-07-23 10:48:31.254102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27effe0 (9): Bad file descriptor 00:27:42.827 [2024-07-23 10:48:31.254243] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:42.827 [2024-07-23 10:48:31.254502] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:42.827 [2024-07-23 10:48:31.254681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.827 [2024-07-23 10:48:31.254722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2809210 with addr=10.0.0.2, port=4420 00:27:42.827 [2024-07-23 10:48:31.254741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2809210 is same with the state(5) to be set 00:27:42.827 [2024-07-23 10:48:31.254819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.254842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.254872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.254889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.254909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.254925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.254942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.254965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.254984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.255000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.255018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.255035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.255053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.255069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.255087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.255103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.255121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.255137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.255155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.255171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.255189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.255205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.255222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.255238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.255256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.255272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.255290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:42.827 [2024-07-23 10:48:31.255305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.827 [2024-07-23 10:48:31.255323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.828 [2024-07-23 10:48:31.255339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.828 [2024-07-23 10:48:31.255357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.828 [2024-07-23 10:48:31.255372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.828 [2024-07-23 10:48:31.255395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.828 [2024-07-23 10:48:31.255411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.828 [2024-07-23 10:48:31.255429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.828 [2024-07-23 10:48:31.255445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.828 [2024-07-23 10:48:31.255462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.828 [2024-07-23 10:48:31.255478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.828 [2024-07-23 10:48:31.255518] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-23 10:48:31.255534 - 10:48:31.257048: 45 identical command/completion pairs, READ sqid:1 cid:19-63 nsid:1 lba:18816-24448 (step 128) len:128, each completed *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:42.829 [2024-07-23 10:48:31.257065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4cb0 is same with the state(5) to be set
[2024-07-23 10:48:31.258557 - 10:48:31.260783: 64 identical command/completion pairs, READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (step 128) len:128, each completed *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:42.831 [2024-07-23 10:48:31.260801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29743f0 is same with the state(5) to be set
00:27:42.831 [2024-07-23 10:48:31.262389] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:42.831 [2024-07-23 10:48:31.263140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.831 [2024-07-23 10:48:31.263180] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:42.831 [2024-07-23 10:48:31.263203] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:42.831 [2024-07-23 10:48:31.263386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.831 [2024-07-23 10:48:31.263420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e2610 with addr=10.0.0.2, port=4420
00:27:42.831 [2024-07-23 10:48:31.263439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e2610 is same with the state(5) to be set
00:27:42.831 [2024-07-23 10:48:31.263582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.831 [2024-07-23 10:48:31.263625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x281e9f0 with addr=10.0.0.2, port=4420
00:27:42.831 [2024-07-23 10:48:31.263644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x281e9f0 is same with the state(5) to be set
00:27:42.831 [2024-07-23 10:48:31.263672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2809210 (9): Bad file descriptor
00:27:42.831 [2024-07-23 10:48:31.263692] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:27:42.831 [2024-07-23 10:48:31.263707] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:27:42.831 [2024-07-23 10:48:31.263724] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:27:42.831 [2024-07-23 10:48:31.263791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2808b90 (9): Bad file descriptor
00:27:42.831 [2024-07-23 10:48:31.263854] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:42.831 [2024-07-23 10:48:31.264056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.831 [2024-07-23 10:48:31.264212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-07-23 10:48:31.264243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b0980 with addr=10.0.0.2, port=4420 00:27:42.831 [2024-07-23 10:48:31.264261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b0980 is same with the state(5) to be set 00:27:42.831 [2024-07-23 10:48:31.264383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-07-23 10:48:31.264409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27e8fa0 with addr=10.0.0.2, port=4420 00:27:42.831 [2024-07-23 10:48:31.264426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e8fa0 is same with the state(5) to be set 00:27:42.831 [2024-07-23 10:48:31.264619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.831 [2024-07-23 10:48:31.264646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27f0950 with addr=10.0.0.2, port=4420 00:27:42.831 [2024-07-23 10:48:31.264663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27f0950 is same with the state(5) to be set 00:27:42.831 [2024-07-23 10:48:31.264684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e2610 (9): Bad file descriptor 00:27:42.831 [2024-07-23 10:48:31.264706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x281e9f0 (9): Bad file descriptor 00:27:42.831 [2024-07-23 10:48:31.264723] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:42.831 [2024-07-23 10:48:31.264737] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 
00:27:42.831 [2024-07-23 10:48:31.264770] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:42.831 [2024-07-23 10:48:31.264799] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:42.831 [2024-07-23 10:48:31.265456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.831 [2024-07-23 10:48:31.265874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.265968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.265983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.266001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.266016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.266034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.266050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.266068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.266083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.266101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.266117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.266134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.266150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.266168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.266184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.266202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.266217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.266235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.266251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.266268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.266284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.266301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.831 [2024-07-23 10:48:31.266317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.831 [2024-07-23 10:48:31.266334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266656] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266846] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.266979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.266996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 
10:48:31.267237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.832 [2024-07-23 10:48:31.267685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.832 [2024-07-23 10:48:31.267701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.833 [2024-07-23 10:48:31.267718] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27c4ee0 is same with the state(5) to be set 00:27:42.833 [2024-07-23 10:48:31.269233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.833 [2024-07-23 10:48:31.269278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.833 [2024-07-23 10:48:31.269307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.833 [2024-07-23 10:48:31.269324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.833 [2024-07-23 10:48:31.269342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.833 [2024-07-23 10:48:31.269358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.833 [2024-07-23 10:48:31.269375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.833 [2024-07-23 10:48:31.269392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.833 [2024-07-23 10:48:31.269410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.833 [2024-07-23 10:48:31.269425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.833 [2024-07-23 10:48:31.269443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.833 [2024-07-23 10:48:31.269459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.833 [2024-07-23 10:48:31.269476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.833 [2024-07-23 10:48:31.269503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.833 [2024-07-23 10:48:31.269521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.833 [2024-07-23 10:48:31.269537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.833 [2024-07-23 10:48:31.269555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.833 [2024-07-23 10:48:31.269571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeated for cid:9 through cid:63 (lba 17536 through 24448, in steps of 128), timestamps 10:48:31.269588 through 10:48:31.271457 ...]
00:27:42.834 [2024-07-23 10:48:31.271474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2975730 is same with the state(5) to be set
00:27:42.834 [2024-07-23 10:48:31.273071]
bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.834 [2024-07-23 10:48:31.273113] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:42.834 [2024-07-23 10:48:31.273140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:42.834 [2024-07-23 10:48:31.273203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b0980 (9): Bad file descriptor
00:27:42.834 [2024-07-23 10:48:31.273228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e8fa0 (9): Bad file descriptor
00:27:42.834 [2024-07-23 10:48:31.273249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27f0950 (9): Bad file descriptor
00:27:42.834 [2024-07-23 10:48:31.273267] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:27:42.834 [2024-07-23 10:48:31.273281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:27:42.834 [2024-07-23 10:48:31.273298] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:27:42.834 [2024-07-23 10:48:31.273324] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:27:42.834 [2024-07-23 10:48:31.273340] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:27:42.834 [2024-07-23 10:48:31.273355] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:27:42.834 [2024-07-23 10:48:31.273385] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:42.834 [2024-07-23 10:48:31.273413] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:42.834 [2024-07-23 10:48:31.273581] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:42.834 [2024-07-23 10:48:31.273618] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.834 [2024-07-23 10:48:31.273637] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.834 [2024-07-23 10:48:31.273836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.834 [2024-07-23 10:48:31.273867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b8060 with addr=10.0.0.2, port=4420
00:27:42.834 [2024-07-23 10:48:31.273886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b8060 is same with the state(5) to be set
00:27:42.834 [2024-07-23 10:48:31.274024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.835 [2024-07-23 10:48:31.274051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2829150 with addr=10.0.0.2, port=4420
00:27:42.835 [2024-07-23 10:48:31.274068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2829150 is same with the state(5) to be set
00:27:42.835 [2024-07-23 10:48:31.274084] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.835 [2024-07-23 10:48:31.274099] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.835 [2024-07-23 10:48:31.274114] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.835 [2024-07-23 10:48:31.274136] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:42.835 [2024-07-23 10:48:31.274151] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:42.835 [2024-07-23 10:48:31.274166] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:42.835 [2024-07-23 10:48:31.274184] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:42.835 [2024-07-23 10:48:31.274199] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:42.835 [2024-07-23 10:48:31.274213] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:42.835 [2024-07-23 10:48:31.274255] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:42.835 [2024-07-23 10:48:31.274279] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:42.835 [2024-07-23 10:48:31.274300] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:42.835 [2024-07-23 10:48:31.274982] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.835 [2024-07-23 10:48:31.275007] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.835 [2024-07-23 10:48:31.275022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.835 [2024-07-23 10:48:31.275052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b8060 (9): Bad file descriptor
00:27:42.835 [2024-07-23 10:48:31.275076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2829150 (9): Bad file descriptor
00:27:42.835 [2024-07-23 10:48:31.275172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.835 [2024-07-23 10:48:31.275197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeated for cid:1 through cid:31 (lba 16512 through 20352, in steps of 128), timestamps 10:48:31.275228 through 10:48:31.276295 ...]
00:27:42.836 [2024-07-23 10:48:31.276312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:42.836 [2024-07-23 10:48:31.276328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:42.836 [2024-07-23 10:48:31.276345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276537] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.276972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.276988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.277021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.277054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.277091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 
10:48:31.277125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.277159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.277192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.277226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.277260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.277296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277315] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.277331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.277364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.836 [2024-07-23 10:48:31.277398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.836 [2024-07-23 10:48:31.277415] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28efbf0 is same with the state(5) to be set 00:27:42.836 [2024-07-23 10:48:31.279163] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:42.836 task offset: 27136 on job bdev=Nvme1n1 fails 00:27:42.836 00:27:42.836 Latency(us) 00:27:42.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.836 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:42.836 Job: Nvme1n1 ended in about 1.05 seconds with error 00:27:42.836 Verification LBA range: start 0x0 length 0x400 00:27:42.836 Nvme1n1 : 1.05 183.25 11.45 61.08 0.00 258737.07 4805.97 309135.74 00:27:42.836 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:42.836 Job: Nvme2n1 ended in about 1.11 seconds with error 00:27:42.836 Verification LBA range: start 0x0 length 0x400 00:27:42.836 Nvme2n1 : 1.11 
115.53 7.22 57.76 0.00 357496.35 27962.03 340204.66 00:27:42.836 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:42.836 Job: Nvme3n1 ended in about 1.11 seconds with error 00:27:42.836 Verification LBA range: start 0x0 length 0x400 00:27:42.836 Nvme3n1 : 1.11 172.71 10.79 57.57 0.00 263264.14 24175.50 295154.73 00:27:42.836 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:42.836 Job: Nvme4n1 ended in about 1.07 seconds with error 00:27:42.836 Verification LBA range: start 0x0 length 0x400 00:27:42.836 Nvme4n1 : 1.07 184.96 11.56 55.40 0.00 245607.92 14078.10 295154.73 00:27:42.836 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:42.836 Job: Nvme5n1 ended in about 1.12 seconds with error 00:27:42.837 Verification LBA range: start 0x0 length 0x400 00:27:42.837 Nvme5n1 : 1.12 114.43 7.15 57.22 0.00 338154.13 49516.09 284280.60 00:27:42.837 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:42.837 Job: Nvme6n1 ended in about 1.12 seconds with error 00:27:42.837 Verification LBA range: start 0x0 length 0x400 00:27:42.837 Nvme6n1 : 1.12 114.05 7.13 57.02 0.00 331790.66 25049.32 309135.74 00:27:42.837 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:42.837 Verification LBA range: start 0x0 length 0x400 00:27:42.837 Nvme7n1 : 1.07 191.32 11.96 0.00 0.00 284265.61 8252.68 296708.17 00:27:42.837 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:42.837 Job: Nvme8n1 ended in about 1.07 seconds with error 00:27:42.837 Verification LBA range: start 0x0 length 0x400 00:27:42.837 Nvme8n1 : 1.07 179.36 11.21 4.67 0.00 291148.05 2548.62 273406.48 00:27:42.837 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:42.837 Job: Nvme9n1 ended in about 1.13 seconds with error 00:27:42.837 Verification LBA range: start 0x0 length 0x400 00:27:42.837 Nvme9n1 : 1.13 113.45 7.09 56.72 
0.00 311492.20 50875.35 310689.19 00:27:42.837 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:42.837 Job: Nvme10n1 ended in about 1.10 seconds with error 00:27:42.837 Verification LBA range: start 0x0 length 0x400 00:27:42.837 Nvme10n1 : 1.10 116.13 7.26 58.07 0.00 295296.57 19223.89 321563.31 00:27:42.837 =================================================================================================================== 00:27:42.837 Total : 1485.19 92.82 465.51 0.00 293859.29 2548.62 340204.66 00:27:43.097 [2024-07-23 10:48:31.306976] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:43.097 [2024-07-23 10:48:31.307071] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:43.097 [2024-07-23 10:48:31.307161] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:43.097 [2024-07-23 10:48:31.307183] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:43.097 [2024-07-23 10:48:31.307203] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:43.097 [2024-07-23 10:48:31.307231] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:43.097 [2024-07-23 10:48:31.307246] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:43.097 [2024-07-23 10:48:31.307260] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:43.097 [2024-07-23 10:48:31.307414] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:43.097 [2024-07-23 10:48:31.307458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:43.097 [2024-07-23 10:48:31.307490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:43.097 [2024-07-23 10:48:31.307742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.097 [2024-07-23 10:48:31.307779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27effe0 with addr=10.0.0.2, port=4420 00:27:43.097 [2024-07-23 10:48:31.307810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27effe0 is same with the state(5) to be set 00:27:43.097 [2024-07-23 10:48:31.307959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.097 [2024-07-23 10:48:31.307986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2808b90 with addr=10.0.0.2, port=4420 00:27:43.097 [2024-07-23 10:48:31.308003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2808b90 is same with the state(5) to be set 00:27:43.097 [2024-07-23 10:48:31.308681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.097 [2024-07-23 10:48:31.308714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2809210 with addr=10.0.0.2, port=4420 00:27:43.097 [2024-07-23 10:48:31.308732] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2809210 is same with the state(5) to be set 00:27:43.097 [2024-07-23 10:48:31.308759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27effe0 (9): Bad file descriptor 00:27:43.097 [2024-07-23 10:48:31.308792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2808b90 (9): Bad file descriptor 00:27:43.097 [2024-07-23 10:48:31.309298] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:43.097 [2024-07-23 10:48:31.309346] 
nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:43.097 [2024-07-23 10:48:31.309365] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:43.097 [2024-07-23 10:48:31.309382] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:43.097 [2024-07-23 10:48:31.309400] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:43.097 [2024-07-23 10:48:31.309425] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:43.097 [2024-07-23 10:48:31.309442] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:43.097 [2024-07-23 10:48:31.309539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2809210 (9): Bad file descriptor 00:27:43.097 [2024-07-23 10:48:31.309563] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:43.097 [2024-07-23 10:48:31.309578] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:43.097 [2024-07-23 10:48:31.309595] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:43.097 [2024-07-23 10:48:31.309615] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:43.097 [2024-07-23 10:48:31.309630] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:43.097 [2024-07-23 10:48:31.309644] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:27:43.097 [2024-07-23 10:48:31.309708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:43.097 [2024-07-23 10:48:31.309729] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:43.097 [2024-07-23 10:48:31.309852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.097 [2024-07-23 10:48:31.309881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x281e9f0 with addr=10.0.0.2, port=4420 00:27:43.097 [2024-07-23 10:48:31.309899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x281e9f0 is same with the state(5) to be set 00:27:43.097 [2024-07-23 10:48:31.310046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.097 [2024-07-23 10:48:31.310072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e2610 with addr=10.0.0.2, port=4420 00:27:43.097 [2024-07-23 10:48:31.310089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e2610 is same with the state(5) to be set 00:27:43.097 [2024-07-23 10:48:31.310210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.097 [2024-07-23 10:48:31.310236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2829150 with addr=10.0.0.2, port=4420 00:27:43.097 [2024-07-23 10:48:31.310252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2829150 is same with the state(5) to be set 00:27:43.097 [2024-07-23 10:48:31.310347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.097 [2024-07-23 10:48:31.310372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b8060 with addr=10.0.0.2, port=4420 00:27:43.097 [2024-07-23 10:48:31.310388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x23b8060 is same with the state(5) to be set 00:27:43.097 [2024-07-23 10:48:31.310489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.097 [2024-07-23 10:48:31.310516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27f0950 with addr=10.0.0.2, port=4420 00:27:43.097 [2024-07-23 10:48:31.310534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27f0950 is same with the state(5) to be set 00:27:43.097 [2024-07-23 10:48:31.310619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.097 [2024-07-23 10:48:31.310646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27e8fa0 with addr=10.0.0.2, port=4420 00:27:43.097 [2024-07-23 10:48:31.310663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e8fa0 is same with the state(5) to be set 00:27:43.097 [2024-07-23 10:48:31.310744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.097 [2024-07-23 10:48:31.310769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b0980 with addr=10.0.0.2, port=4420 00:27:43.097 [2024-07-23 10:48:31.310785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b0980 is same with the state(5) to be set 00:27:43.097 [2024-07-23 10:48:31.310801] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:43.097 [2024-07-23 10:48:31.310816] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:43.097 [2024-07-23 10:48:31.310831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:43.097 [2024-07-23 10:48:31.310881] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:43.097 [2024-07-23 10:48:31.310907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x281e9f0 (9): Bad file descriptor 00:27:43.097 [2024-07-23 10:48:31.310929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e2610 (9): Bad file descriptor 00:27:43.097 [2024-07-23 10:48:31.310949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2829150 (9): Bad file descriptor 00:27:43.097 [2024-07-23 10:48:31.310968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b8060 (9): Bad file descriptor 00:27:43.097 [2024-07-23 10:48:31.310988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27f0950 (9): Bad file descriptor 00:27:43.097 [2024-07-23 10:48:31.311007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e8fa0 (9): Bad file descriptor 00:27:43.097 [2024-07-23 10:48:31.311026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b0980 (9): Bad file descriptor 00:27:43.097 [2024-07-23 10:48:31.311073] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:43.097 [2024-07-23 10:48:31.311099] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:43.097 [2024-07-23 10:48:31.311115] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:27:43.097 [2024-07-23 10:48:31.311134] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:43.097 [2024-07-23 10:48:31.311149] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:43.097 [2024-07-23 10:48:31.311164] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:43.097 [2024-07-23 10:48:31.311181] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:43.097 [2024-07-23 10:48:31.311196] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:43.097 [2024-07-23 10:48:31.311209] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:43.097 [2024-07-23 10:48:31.311227] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:43.097 [2024-07-23 10:48:31.311241] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:43.097 [2024-07-23 10:48:31.311256] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:43.097 [2024-07-23 10:48:31.311273] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:43.097 [2024-07-23 10:48:31.311288] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:43.097 [2024-07-23 10:48:31.311304] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:27:43.097 [2024-07-23 10:48:31.311322] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:43.097 [2024-07-23 10:48:31.311337] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:43.097 [2024-07-23 10:48:31.311351] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:43.098 [2024-07-23 10:48:31.311369] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:43.098 [2024-07-23 10:48:31.311384] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:43.098 [2024-07-23 10:48:31.311398] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:43.098 [2024-07-23 10:48:31.311442] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:43.098 [2024-07-23 10:48:31.311461] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:43.098 [2024-07-23 10:48:31.311475] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:43.098 [2024-07-23 10:48:31.311497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:43.098 [2024-07-23 10:48:31.311520] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:43.098 [2024-07-23 10:48:31.311533] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:43.098 [2024-07-23 10:48:31.311546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:43.356 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:43.356 10:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3894463 00:27:44.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3894463) - No such process 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:44.293 rmmod nvme_tcp 00:27:44.293 rmmod nvme_fabrics 00:27:44.293 rmmod nvme_keyring 00:27:44.293 10:48:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.293 10:48:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.232 10:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:46.232 00:27:46.232 real 0m7.391s 00:27:46.232 user 0m18.207s 00:27:46.232 sys 0m1.411s 00:27:46.232 10:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:46.232 10:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:46.232 ************************************ 00:27:46.232 END TEST nvmf_shutdown_tc3 00:27:46.232 ************************************ 00:27:46.490 10:48:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - 
SIGINT SIGTERM EXIT 00:27:46.490 00:27:46.490 real 0m26.208s 00:27:46.490 user 1m14.583s 00:27:46.490 sys 0m5.728s 00:27:46.490 10:48:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:46.490 10:48:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:46.490 ************************************ 00:27:46.490 END TEST nvmf_shutdown 00:27:46.490 ************************************ 00:27:46.490 10:48:34 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:46.490 10:48:34 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:46.490 10:48:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:46.490 10:48:34 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:46.490 10:48:34 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:46.490 10:48:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:46.490 10:48:34 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:46.490 10:48:34 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:46.490 10:48:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:46.490 10:48:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:46.490 10:48:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:46.490 ************************************ 00:27:46.490 START TEST nvmf_multicontroller 00:27:46.490 ************************************ 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:46.490 * Looking for test storage... 
00:27:46.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.490 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.491 
10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:46.491 10:48:34 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:46.491 10:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.394 10:48:36 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:48.394 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:48.394 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:48.394 10:48:36 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:27:48.394 Found net devices under 0000:08:00.0: cvl_0_0 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:48.394 Found net devices under 0000:08:00.1: cvl_0_1 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:48.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:27:48.394 00:27:48.394 --- 10.0.0.2 ping statistics --- 00:27:48.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.394 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:27:48.394 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:48.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:27:48.395 00:27:48.395 --- 10.0.0.1 ping statistics --- 00:27:48.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.395 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3896429 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3896429 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3896429 ']' 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:48.395 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.395 [2024-07-23 10:48:36.657410] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:27:48.395 [2024-07-23 10:48:36.657515] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.395 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.395 [2024-07-23 10:48:36.722901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:48.395 [2024-07-23 10:48:36.809859] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:48.395 [2024-07-23 10:48:36.809923] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:48.395 [2024-07-23 10:48:36.809940] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:48.395 [2024-07-23 10:48:36.809962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:48.395 [2024-07-23 10:48:36.809974] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:48.395 [2024-07-23 10:48:36.810061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:48.395 [2024-07-23 10:48:36.810111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:48.395 [2024-07-23 10:48:36.810115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 [2024-07-23 10:48:36.933200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 Malloc0 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.653 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 [2024-07-23 10:48:36.997362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 [2024-07-23 10:48:37.005271] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 Malloc1 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3896451 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3896451 /var/tmp/bdevperf.sock 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3896451 ']' 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:48.653 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:48.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:48.654 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:48.654 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.913 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:48.913 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:48.913 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:48.913 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.913 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:49.173 NVMe0n1 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.173 1 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.173 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:49.173 request: 00:27:49.173 { 00:27:49.173 "name": "NVMe0", 00:27:49.173 "trtype": "tcp", 00:27:49.173 "traddr": "10.0.0.2", 00:27:49.173 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:49.173 "hostaddr": "10.0.0.2", 00:27:49.173 "hostsvcid": "60000", 00:27:49.173 "adrfam": "ipv4", 00:27:49.173 "trsvcid": "4420", 00:27:49.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:49.173 "method": "bdev_nvme_attach_controller", 00:27:49.173 "req_id": 1 00:27:49.173 } 00:27:49.173 Got JSON-RPC error response 00:27:49.173 response: 00:27:49.173 { 00:27:49.174 "code": -114, 00:27:49.174 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:49.174 } 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:49.174 10:48:37 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:49.174 request: 00:27:49.174 { 00:27:49.174 "name": "NVMe0", 00:27:49.174 "trtype": "tcp", 
00:27:49.174 "traddr": "10.0.0.2", 00:27:49.174 "hostaddr": "10.0.0.2", 00:27:49.174 "hostsvcid": "60000", 00:27:49.174 "adrfam": "ipv4", 00:27:49.174 "trsvcid": "4420", 00:27:49.174 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:49.174 "method": "bdev_nvme_attach_controller", 00:27:49.174 "req_id": 1 00:27:49.174 } 00:27:49.174 Got JSON-RPC error response 00:27:49.174 response: 00:27:49.174 { 00:27:49.174 "code": -114, 00:27:49.174 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:49.174 } 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:49.174 request: 00:27:49.174 { 00:27:49.174 "name": "NVMe0", 00:27:49.174 "trtype": "tcp", 00:27:49.174 "traddr": "10.0.0.2", 00:27:49.174 "hostaddr": "10.0.0.2", 00:27:49.174 "hostsvcid": "60000", 00:27:49.174 "adrfam": "ipv4", 00:27:49.174 "trsvcid": "4420", 00:27:49.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:49.174 "multipath": "disable", 00:27:49.174 "method": "bdev_nvme_attach_controller", 00:27:49.174 "req_id": 1 00:27:49.174 } 00:27:49.174 Got JSON-RPC error response 00:27:49.174 response: 00:27:49.174 { 00:27:49.174 "code": -114, 00:27:49.174 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:49.174 } 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 
10.0.0.2 -c 60000 -x failover 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:49.174 request: 00:27:49.174 { 00:27:49.174 "name": "NVMe0", 00:27:49.174 "trtype": "tcp", 00:27:49.174 "traddr": "10.0.0.2", 00:27:49.174 "hostaddr": "10.0.0.2", 00:27:49.174 "hostsvcid": "60000", 00:27:49.174 "adrfam": "ipv4", 00:27:49.174 "trsvcid": "4420", 00:27:49.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:49.174 "multipath": "failover", 00:27:49.174 "method": "bdev_nvme_attach_controller", 00:27:49.174 "req_id": 1 00:27:49.174 } 00:27:49.174 Got JSON-RPC error response 00:27:49.174 response: 00:27:49.174 { 00:27:49.174 "code": -114, 00:27:49.174 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:49.174 } 
00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:49.174 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.174 10:48:37 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:49.433 00:27:49.433 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.433 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:49.433 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:49.433 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.433 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:49.433 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.433 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:49.433 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:50.809 0 00:27:50.810 10:48:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:50.810 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.810 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:50.810 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.810 10:48:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3896451 00:27:50.810 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3896451 ']' 00:27:50.810 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3896451 00:27:50.810 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:27:50.810 10:48:38 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:50.810 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3896451 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3896451' 00:27:50.810 killing process with pid 3896451 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3896451 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3896451 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:50.810 10:48:39 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:27:50.810 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:50.810 [2024-07-23 10:48:37.106293] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:50.810 [2024-07-23 10:48:37.106396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896451 ] 00:27:50.810 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.810 [2024-07-23 10:48:37.167694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.810 [2024-07-23 10:48:37.255155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.810 [2024-07-23 10:48:37.812705] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name c929a6ef-cd22-4d97-aa03-001600515a00 already exists 00:27:50.810 [2024-07-23 10:48:37.812750] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:c929a6ef-cd22-4d97-aa03-001600515a00 alias for bdev NVMe1n1 00:27:50.810 [2024-07-23 10:48:37.812770] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:50.810 Running I/O for 1 seconds... 
00:27:50.810 00:27:50.810 Latency(us) 00:27:50.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.810 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:50.810 NVMe0n1 : 1.01 16493.77 64.43 0.00 0.00 7746.70 3325.35 13689.74 00:27:50.810 =================================================================================================================== 00:27:50.810 Total : 16493.77 64.43 0.00 0.00 7746.70 3325.35 13689.74 00:27:50.810 Received shutdown signal, test time was about 1.000000 seconds 00:27:50.810 00:27:50.810 Latency(us) 00:27:50.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.810 =================================================================================================================== 00:27:50.810 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:50.810 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:50.810 rmmod nvme_tcp 00:27:50.810 rmmod nvme_fabrics 00:27:50.810 rmmod nvme_keyring 
00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3896429 ']' 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3896429 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3896429 ']' 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3896429 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3896429 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3896429' 00:27:50.810 killing process with pid 3896429 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3896429 00:27:50.810 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3896429 00:27:51.068 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:51.069 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:51.069 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:51.069 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.069 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:51.069 10:48:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.069 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.069 10:48:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.606 10:48:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:53.606 00:27:53.606 real 0m6.731s 00:27:53.606 user 0m10.878s 00:27:53.606 sys 0m1.948s 00:27:53.606 10:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:53.606 10:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.606 ************************************ 00:27:53.606 END TEST nvmf_multicontroller 00:27:53.606 ************************************ 00:27:53.606 10:48:41 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:53.606 10:48:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:53.606 10:48:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:53.606 10:48:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:53.606 ************************************ 00:27:53.606 START TEST nvmf_aer 00:27:53.606 ************************************ 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:53.606 * Looking for test storage... 
00:27:53.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:53.606 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:53.607 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.607 10:48:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.607 10:48:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.607 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:53.607 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:53.607 10:48:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:53.607 10:48:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:54.986 10:48:43 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:54.986 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:54.986 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:27:54.986 Found net devices under 0000:08:00.0: cvl_0_0 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:54.986 Found net devices under 0000:08:00.1: cvl_0_1 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:54.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:27:54.986 00:27:54.986 --- 10.0.0.2 ping statistics --- 00:27:54.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.986 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:27:54.986 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:54.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:27:54.986 00:27:54.986 --- 10.0.0.1 ping statistics --- 00:27:54.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.987 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3898156 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3898156 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3898156 ']' 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:54.987 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:54.987 [2024-07-23 10:48:43.449976] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:54.987 [2024-07-23 10:48:43.450072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.987 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.245 [2024-07-23 10:48:43.514701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:55.245 [2024-07-23 10:48:43.602865] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:55.245 [2024-07-23 10:48:43.602926] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.245 [2024-07-23 10:48:43.602943] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.245 [2024-07-23 10:48:43.602956] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:55.245 [2024-07-23 10:48:43.602967] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:55.245 [2024-07-23 10:48:43.604147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.245 [2024-07-23 10:48:43.604281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:55.245 [2024-07-23 10:48:43.604308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:55.245 [2024-07-23 10:48:43.604311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.245 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:55.245 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:27:55.245 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:55.245 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:55.245 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.245 10:48:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.245 10:48:43 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:55.245 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.245 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.245 [2024-07-23 10:48:43.739999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.504 10:48:43 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.504 Malloc0 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.504 [2024-07-23 10:48:43.788767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.504 [ 00:27:55.504 { 00:27:55.504 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:55.504 "subtype": "Discovery", 00:27:55.504 "listen_addresses": [], 00:27:55.504 "allow_any_host": true, 00:27:55.504 "hosts": [] 00:27:55.504 }, 00:27:55.504 { 00:27:55.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.504 "subtype": "NVMe", 00:27:55.504 "listen_addresses": [ 00:27:55.504 { 00:27:55.504 "trtype": "TCP", 00:27:55.504 "adrfam": "IPv4", 00:27:55.504 "traddr": "10.0.0.2", 00:27:55.504 "trsvcid": "4420" 00:27:55.504 } 00:27:55.504 ], 00:27:55.504 "allow_any_host": true, 00:27:55.504 "hosts": [], 00:27:55.504 "serial_number": "SPDK00000000000001", 00:27:55.504 "model_number": "SPDK bdev Controller", 00:27:55.504 "max_namespaces": 2, 00:27:55.504 "min_cntlid": 1, 00:27:55.504 "max_cntlid": 65519, 00:27:55.504 "namespaces": [ 00:27:55.504 { 00:27:55.504 "nsid": 1, 00:27:55.504 "bdev_name": "Malloc0", 00:27:55.504 "name": "Malloc0", 00:27:55.504 "nguid": "0AC69F18E2FB45BA95EC70B1C6AE1958", 00:27:55.504 "uuid": "0ac69f18-e2fb-45ba-95ec-70b1c6ae1958" 00:27:55.504 } 00:27:55.504 ] 00:27:55.504 } 00:27:55.504 ] 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3898189 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:55.504 10:48:43 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:27:55.504 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:27:55.504 10:48:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.763 Malloc1 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.763 [ 00:27:55.763 { 00:27:55.763 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:55.763 "subtype": "Discovery", 00:27:55.763 "listen_addresses": [], 00:27:55.763 "allow_any_host": true, 00:27:55.763 "hosts": [] 00:27:55.763 }, 00:27:55.763 { 00:27:55.763 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.763 "subtype": "NVMe", 00:27:55.763 "listen_addresses": [ 00:27:55.763 { 00:27:55.763 "trtype": "TCP", 00:27:55.763 "adrfam": "IPv4", 00:27:55.763 "traddr": "10.0.0.2", 00:27:55.763 "trsvcid": "4420" 00:27:55.763 } 00:27:55.763 ], 00:27:55.763 "allow_any_host": true, 00:27:55.763 "hosts": [], 00:27:55.763 "serial_number": "SPDK00000000000001", 00:27:55.763 "model_number": "SPDK bdev Controller", 00:27:55.763 "max_namespaces": 2, 00:27:55.763 "min_cntlid": 1, 00:27:55.763 "max_cntlid": 65519, 
00:27:55.763 "namespaces": [ 00:27:55.763 { 00:27:55.763 "nsid": 1, 00:27:55.763 "bdev_name": "Malloc0", 00:27:55.763 "name": "Malloc0", 00:27:55.763 "nguid": "0AC69F18E2FB45BA95EC70B1C6AE1958", 00:27:55.763 "uuid": "0ac69f18-e2fb-45ba-95ec-70b1c6ae1958" 00:27:55.763 }, 00:27:55.763 { 00:27:55.763 "nsid": 2, 00:27:55.763 "bdev_name": "Malloc1", 00:27:55.763 "name": "Malloc1", 00:27:55.763 "nguid": "4C816B8077E643F2ACCEA3ED51F937BB", 00:27:55.763 "uuid": "4c816b80-77e6-43f2-acce-a3ed51f937bb" 00:27:55.763 } 00:27:55.763 ] 00:27:55.763 } 00:27:55.763 ] 00:27:55.763 Asynchronous Event Request test 00:27:55.763 Attaching to 10.0.0.2 00:27:55.763 Attached to 10.0.0.2 00:27:55.763 Registering asynchronous event callbacks... 00:27:55.763 Starting namespace attribute notice tests for all controllers... 00:27:55.763 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:55.763 aer_cb - Changed Namespace 00:27:55.763 Cleaning up... 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3898189 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.763 
10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.763 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:55.764 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.764 10:48:44 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:55.764 10:48:44 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:55.764 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:55.764 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:55.764 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:55.764 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:55.764 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:55.764 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:55.764 rmmod nvme_tcp 00:27:55.764 rmmod nvme_fabrics 00:27:56.024 rmmod nvme_keyring 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3898156 ']' 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3898156 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3898156 ']' 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3898156 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3898156 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3898156' 00:27:56.024 killing process with pid 3898156 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3898156 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3898156 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.024 10:48:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.566 10:48:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:58.566 00:27:58.566 real 0m4.917s 00:27:58.566 user 0m4.062s 00:27:58.566 sys 0m1.636s 00:27:58.566 10:48:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:58.566 10:48:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:58.566 ************************************ 00:27:58.566 END TEST nvmf_aer 00:27:58.566 ************************************ 00:27:58.566 10:48:46 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:58.566 10:48:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
00:27:58.566 10:48:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:58.566 10:48:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:58.566 ************************************ 00:27:58.566 START TEST nvmf_async_init 00:27:58.566 ************************************ 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:58.566 * Looking for test storage... 00:27:58.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:58.566 10:48:46 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:58.566 10:48:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ccd8307cd83a461e9b9c95e52d5d7be6 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:58.567 10:48:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 
00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:59.946 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.946 10:48:48 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:59.946 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:59.947 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.947 
10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:27:59.947 Found net devices under 0000:08:00.0: cvl_0_0 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:59.947 Found net devices under 0000:08:00.1: cvl_0_1 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init 
-- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.947 
10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.947 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:00.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:28:00.206 00:28:00.206 --- 10.0.0.2 ping statistics --- 00:28:00.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.206 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:28:00.206 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:28:00.206 00:28:00.206 --- 10.0.0.1 ping statistics --- 00:28:00.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.206 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 
00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3899686 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3899686 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3899686 ']' 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:00.207 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.207 [2024-07-23 10:48:48.539775] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:28:00.207 [2024-07-23 10:48:48.539874] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.207 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.207 [2024-07-23 10:48:48.604139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.207 [2024-07-23 10:48:48.690703] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.207 [2024-07-23 10:48:48.690768] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.207 [2024-07-23 10:48:48.690783] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.207 [2024-07-23 10:48:48.690796] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.207 [2024-07-23 10:48:48.690807] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:00.207 [2024-07-23 10:48:48.690844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.466 [2024-07-23 10:48:48.821577] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.466 null0 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.466 
10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ccd8307cd83a461e9b9c95e52d5d7be6 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.466 [2024-07-23 10:48:48.861797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.466 10:48:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.724 nvme0n1 00:28:00.724 10:48:49 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.724 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:00.724 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.724 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.724 [ 00:28:00.724 { 00:28:00.724 "name": "nvme0n1", 00:28:00.724 "aliases": [ 00:28:00.724 "ccd8307c-d83a-461e-9b9c-95e52d5d7be6" 00:28:00.724 ], 00:28:00.724 "product_name": "NVMe disk", 00:28:00.724 "block_size": 512, 00:28:00.724 "num_blocks": 2097152, 00:28:00.724 "uuid": "ccd8307c-d83a-461e-9b9c-95e52d5d7be6", 00:28:00.724 "assigned_rate_limits": { 00:28:00.724 "rw_ios_per_sec": 0, 00:28:00.724 "rw_mbytes_per_sec": 0, 00:28:00.724 "r_mbytes_per_sec": 0, 00:28:00.724 "w_mbytes_per_sec": 0 00:28:00.724 }, 00:28:00.724 "claimed": false, 00:28:00.724 "zoned": false, 00:28:00.724 "supported_io_types": { 00:28:00.724 "read": true, 00:28:00.724 "write": true, 00:28:00.724 "unmap": false, 00:28:00.724 "write_zeroes": true, 00:28:00.724 "flush": true, 00:28:00.724 "reset": true, 00:28:00.724 "compare": true, 00:28:00.724 "compare_and_write": true, 00:28:00.724 "abort": true, 00:28:00.724 "nvme_admin": true, 00:28:00.724 "nvme_io": true 00:28:00.724 }, 00:28:00.724 "memory_domains": [ 00:28:00.724 { 00:28:00.724 "dma_device_id": "system", 00:28:00.724 "dma_device_type": 1 00:28:00.724 } 00:28:00.724 ], 00:28:00.724 "driver_specific": { 00:28:00.724 "nvme": [ 00:28:00.724 { 00:28:00.724 "trid": { 00:28:00.724 "trtype": "TCP", 00:28:00.724 "adrfam": "IPv4", 00:28:00.724 "traddr": "10.0.0.2", 00:28:00.724 "trsvcid": "4420", 00:28:00.724 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:00.724 }, 00:28:00.724 "ctrlr_data": { 00:28:00.724 "cntlid": 1, 00:28:00.724 "vendor_id": "0x8086", 00:28:00.724 "model_number": "SPDK bdev Controller", 00:28:00.724 "serial_number": "00000000000000000000", 
00:28:00.724 "firmware_revision": "24.05.1", 00:28:00.724 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.724 "oacs": { 00:28:00.724 "security": 0, 00:28:00.724 "format": 0, 00:28:00.724 "firmware": 0, 00:28:00.724 "ns_manage": 0 00:28:00.724 }, 00:28:00.724 "multi_ctrlr": true, 00:28:00.724 "ana_reporting": false 00:28:00.724 }, 00:28:00.724 "vs": { 00:28:00.724 "nvme_version": "1.3" 00:28:00.724 }, 00:28:00.724 "ns_data": { 00:28:00.724 "id": 1, 00:28:00.724 "can_share": true 00:28:00.724 } 00:28:00.724 } 00:28:00.724 ], 00:28:00.724 "mp_policy": "active_passive" 00:28:00.724 } 00:28:00.724 } 00:28:00.724 ] 00:28:00.724 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.724 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:00.724 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.724 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.724 [2024-07-23 10:48:49.114474] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:00.724 [2024-07-23 10:48:49.114579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f91840 (9): Bad file descriptor 00:28:00.983 [2024-07-23 10:48:49.256635] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.983 [ 00:28:00.983 { 00:28:00.983 "name": "nvme0n1", 00:28:00.983 "aliases": [ 00:28:00.983 "ccd8307c-d83a-461e-9b9c-95e52d5d7be6" 00:28:00.983 ], 00:28:00.983 "product_name": "NVMe disk", 00:28:00.983 "block_size": 512, 00:28:00.983 "num_blocks": 2097152, 00:28:00.983 "uuid": "ccd8307c-d83a-461e-9b9c-95e52d5d7be6", 00:28:00.983 "assigned_rate_limits": { 00:28:00.983 "rw_ios_per_sec": 0, 00:28:00.983 "rw_mbytes_per_sec": 0, 00:28:00.983 "r_mbytes_per_sec": 0, 00:28:00.983 "w_mbytes_per_sec": 0 00:28:00.983 }, 00:28:00.983 "claimed": false, 00:28:00.983 "zoned": false, 00:28:00.983 "supported_io_types": { 00:28:00.983 "read": true, 00:28:00.983 "write": true, 00:28:00.983 "unmap": false, 00:28:00.983 "write_zeroes": true, 00:28:00.983 "flush": true, 00:28:00.983 "reset": true, 00:28:00.983 "compare": true, 00:28:00.983 "compare_and_write": true, 00:28:00.983 "abort": true, 00:28:00.983 "nvme_admin": true, 00:28:00.983 "nvme_io": true 00:28:00.983 }, 00:28:00.983 "memory_domains": [ 00:28:00.983 { 00:28:00.983 "dma_device_id": "system", 00:28:00.983 "dma_device_type": 1 00:28:00.983 } 00:28:00.983 ], 00:28:00.983 "driver_specific": { 00:28:00.983 "nvme": [ 00:28:00.983 { 00:28:00.983 "trid": { 00:28:00.983 "trtype": "TCP", 00:28:00.983 "adrfam": "IPv4", 00:28:00.983 "traddr": "10.0.0.2", 00:28:00.983 "trsvcid": "4420", 00:28:00.983 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:00.983 }, 00:28:00.983 "ctrlr_data": { 00:28:00.983 "cntlid": 2, 00:28:00.983 "vendor_id": "0x8086", 00:28:00.983 "model_number": "SPDK bdev Controller", 00:28:00.983 "serial_number": 
"00000000000000000000", 00:28:00.983 "firmware_revision": "24.05.1", 00:28:00.983 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.983 "oacs": { 00:28:00.983 "security": 0, 00:28:00.983 "format": 0, 00:28:00.983 "firmware": 0, 00:28:00.983 "ns_manage": 0 00:28:00.983 }, 00:28:00.983 "multi_ctrlr": true, 00:28:00.983 "ana_reporting": false 00:28:00.983 }, 00:28:00.983 "vs": { 00:28:00.983 "nvme_version": "1.3" 00:28:00.983 }, 00:28:00.983 "ns_data": { 00:28:00.983 "id": 1, 00:28:00.983 "can_share": true 00:28:00.983 } 00:28:00.983 } 00:28:00.983 ], 00:28:00.983 "mp_policy": "active_passive" 00:28:00.983 } 00:28:00.983 } 00:28:00.983 ] 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Egvh77JjGl 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Egvh77JjGl 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.983 [2024-07-23 10:48:49.311148] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:00.983 [2024-07-23 10:48:49.311295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Egvh77JjGl 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.983 [2024-07-23 10:48:49.319151] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Egvh77JjGl 00:28:00.983 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.984 [2024-07-23 10:48:49.327172] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:28:00.984 [2024-07-23 10:48:49.327233] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:00.984 nvme0n1 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.984 [ 00:28:00.984 { 00:28:00.984 "name": "nvme0n1", 00:28:00.984 "aliases": [ 00:28:00.984 "ccd8307c-d83a-461e-9b9c-95e52d5d7be6" 00:28:00.984 ], 00:28:00.984 "product_name": "NVMe disk", 00:28:00.984 "block_size": 512, 00:28:00.984 "num_blocks": 2097152, 00:28:00.984 "uuid": "ccd8307c-d83a-461e-9b9c-95e52d5d7be6", 00:28:00.984 "assigned_rate_limits": { 00:28:00.984 "rw_ios_per_sec": 0, 00:28:00.984 "rw_mbytes_per_sec": 0, 00:28:00.984 "r_mbytes_per_sec": 0, 00:28:00.984 "w_mbytes_per_sec": 0 00:28:00.984 }, 00:28:00.984 "claimed": false, 00:28:00.984 "zoned": false, 00:28:00.984 "supported_io_types": { 00:28:00.984 "read": true, 00:28:00.984 "write": true, 00:28:00.984 "unmap": false, 00:28:00.984 "write_zeroes": true, 00:28:00.984 "flush": true, 00:28:00.984 "reset": true, 00:28:00.984 "compare": true, 00:28:00.984 "compare_and_write": true, 00:28:00.984 "abort": true, 00:28:00.984 "nvme_admin": true, 00:28:00.984 "nvme_io": true 00:28:00.984 }, 00:28:00.984 "memory_domains": [ 00:28:00.984 { 00:28:00.984 "dma_device_id": "system", 00:28:00.984 "dma_device_type": 1 00:28:00.984 } 00:28:00.984 ], 00:28:00.984 "driver_specific": { 00:28:00.984 "nvme": [ 00:28:00.984 { 00:28:00.984 "trid": { 00:28:00.984 "trtype": "TCP", 00:28:00.984 "adrfam": "IPv4", 00:28:00.984 "traddr": "10.0.0.2", 00:28:00.984 "trsvcid": "4421", 00:28:00.984 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:28:00.984 }, 00:28:00.984 "ctrlr_data": { 00:28:00.984 "cntlid": 3, 00:28:00.984 "vendor_id": "0x8086", 00:28:00.984 "model_number": "SPDK bdev Controller", 00:28:00.984 "serial_number": "00000000000000000000", 00:28:00.984 "firmware_revision": "24.05.1", 00:28:00.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.984 "oacs": { 00:28:00.984 "security": 0, 00:28:00.984 "format": 0, 00:28:00.984 "firmware": 0, 00:28:00.984 "ns_manage": 0 00:28:00.984 }, 00:28:00.984 "multi_ctrlr": true, 00:28:00.984 "ana_reporting": false 00:28:00.984 }, 00:28:00.984 "vs": { 00:28:00.984 "nvme_version": "1.3" 00:28:00.984 }, 00:28:00.984 "ns_data": { 00:28:00.984 "id": 1, 00:28:00.984 "can_share": true 00:28:00.984 } 00:28:00.984 } 00:28:00.984 ], 00:28:00.984 "mp_policy": "active_passive" 00:28:00.984 } 00:28:00.984 } 00:28:00.984 ] 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Egvh77JjGl 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set 
+e 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:00.984 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:00.984 rmmod nvme_tcp 00:28:00.984 rmmod nvme_fabrics 00:28:00.984 rmmod nvme_keyring 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3899686 ']' 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3899686 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3899686 ']' 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3899686 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3899686 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3899686' 00:28:01.245 killing process with pid 3899686 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3899686 00:28:01.245 [2024-07-23 10:48:49.519945] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:01.245 [2024-07-23 10:48:49.519991] 
app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3899686 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.245 10:48:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.784 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:03.784 00:28:03.784 real 0m5.124s 00:28:03.784 user 0m1.966s 00:28:03.784 sys 0m1.572s 00:28:03.784 10:48:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:03.784 10:48:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:03.784 ************************************ 00:28:03.784 END TEST nvmf_async_init 00:28:03.784 ************************************ 00:28:03.784 10:48:51 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:03.784 10:48:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:03.784 10:48:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:03.784 10:48:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:03.784 
************************************ 00:28:03.784 START TEST dma 00:28:03.784 ************************************ 00:28:03.784 10:48:51 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:03.784 * Looking for test storage... 00:28:03.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.784 10:48:51 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.784 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.784 10:48:51 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.784 10:48:51 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.784 10:48:51 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.784 10:48:51 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.784 10:48:51 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.785 10:48:51 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.785 10:48:51 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:03.785 10:48:51 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.785 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:03.785 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:03.785 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:03.785 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.785 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.785 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.785 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:03.785 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:03.785 10:48:51 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:03.785 10:48:51 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:03.785 10:48:51 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:03.785 00:28:03.785 real 0m0.078s 00:28:03.785 user 0m0.039s 00:28:03.785 sys 0m0.044s 00:28:03.785 10:48:51 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:03.785 10:48:51 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:03.785 ************************************ 00:28:03.785 END TEST dma 00:28:03.785 ************************************ 00:28:03.785 10:48:51 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:03.785 10:48:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:03.785 10:48:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:03.785 10:48:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:03.785 ************************************ 00:28:03.785 START TEST nvmf_identify 00:28:03.785 ************************************ 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:03.785 * Looking for test storage... 
00:28:03.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.785 10:48:51 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:03.785 10:48:51 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:03.785 10:48:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:05.163 10:48:53 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.163 
10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:28:05.163 Found 0000:08:00.0 (0x8086 - 0x159b) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:28:05.163 Found 0000:08:00.1 (0x8086 - 0x159b) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:28:05.163 Found net devices under 0000:08:00.0: cvl_0_0 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:28:05.163 Found net devices under 0000:08:00.1: cvl_0_1 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:05.163 10:48:53 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:05.163 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:05.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:28:05.424 00:28:05.424 --- 10.0.0.2 ping statistics --- 00:28:05.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.424 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:05.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:28:05.424 00:28:05.424 --- 10.0.0.1 ping statistics --- 00:28:05.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.424 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3901343 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3901343 00:28:05.424 10:48:53 
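The `nvmf_tcp_init` sequence traced above — flush both `cvl` interfaces, create a network namespace for the target side, move `cvl_0_0` into it, assign 10.0.0.1/10.0.0.2, bring the links up, open TCP port 4420, and verify reachability with `ping` in both directions — can be condensed into a standalone sketch. Interface names and addresses are taken from this run; the `run`/`DRY_RUN` wrapper is an addition here so the script can be read and exercised without root or the actual CVL NICs:

```shell
#!/usr/bin/env bash
# Sketch of the NVMe/TCP loopback topology nvmf/common.sh builds in this
# run: the target interface lives in a netns, the initiator stays in the
# default namespace. DRY_RUN=1 (the default) prints commands instead of
# executing them; set DRY_RUN=0 to really apply the setup (requires root).
set -euo pipefail

TGT_IF=cvl_0_0          # target-side interface (moved into the netns)
INI_IF=cvl_0_1          # initiator-side interface
NS=cvl_0_0_ns_spdk      # namespace name used by the test scripts
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

The target application is then launched inside the namespace (`ip netns exec "$NS" nvmf_tgt ...`, as the `NVMF_TARGET_NS_CMD` array in the log shows), so its listener on 10.0.0.2:4420 is reachable only over the veth-like `cvl` pair.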
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3901343 ']' 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:05.424 10:48:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.424 [2024-07-23 10:48:53.782975] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:05.424 [2024-07-23 10:48:53.783072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.424 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.424 [2024-07-23 10:48:53.849199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.683 [2024-07-23 10:48:53.942158] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.683 [2024-07-23 10:48:53.942224] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.683 [2024-07-23 10:48:53.942239] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.683 [2024-07-23 10:48:53.942252] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.683 [2024-07-23 10:48:53.942264] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:05.683 [2024-07-23 10:48:53.942345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.683 [2024-07-23 10:48:53.942445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.683 [2024-07-23 10:48:53.942447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.683 [2024-07-23 10:48:53.942396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.683 [2024-07-23 10:48:54.059093] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.683 Malloc0 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:05.683 
10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.683 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.684 [2024-07-23 10:48:54.136706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.684 10:48:54 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.684 [ 00:28:05.684 { 00:28:05.684 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:05.684 "subtype": "Discovery", 00:28:05.684 "listen_addresses": [ 00:28:05.684 { 00:28:05.684 "trtype": "TCP", 00:28:05.684 "adrfam": "IPv4", 00:28:05.684 "traddr": "10.0.0.2", 00:28:05.684 "trsvcid": "4420" 00:28:05.684 } 00:28:05.684 ], 00:28:05.684 "allow_any_host": true, 00:28:05.684 "hosts": [] 00:28:05.684 }, 00:28:05.684 { 00:28:05.684 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.684 "subtype": "NVMe", 00:28:05.684 "listen_addresses": [ 00:28:05.684 { 00:28:05.684 "trtype": "TCP", 00:28:05.684 "adrfam": "IPv4", 00:28:05.684 "traddr": "10.0.0.2", 00:28:05.684 "trsvcid": "4420" 00:28:05.684 } 00:28:05.684 ], 00:28:05.684 "allow_any_host": true, 00:28:05.684 "hosts": [], 00:28:05.684 "serial_number": "SPDK00000000000001", 00:28:05.684 "model_number": "SPDK bdev Controller", 00:28:05.684 "max_namespaces": 32, 00:28:05.684 "min_cntlid": 1, 00:28:05.684 "max_cntlid": 65519, 00:28:05.684 "namespaces": [ 00:28:05.684 { 00:28:05.684 "nsid": 1, 00:28:05.684 "bdev_name": "Malloc0", 00:28:05.684 "name": "Malloc0", 00:28:05.684 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:05.684 "eui64": "ABCDEF0123456789", 00:28:05.684 "uuid": "dabf9f3d-a538-4ff3-b01a-a2f8a6248d43" 00:28:05.684 } 00:28:05.684 ] 00:28:05.684 } 00:28:05.684 ] 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.684 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:05.684 [2024-07-23 10:48:54.176743] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
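The target configuration that `host/identify.sh` drives above maps to a short RPC sequence: create the TCP transport, back a namespace with a malloc bdev, create the subsystem, attach the namespace, and add listeners for both the subsystem and discovery service (producing the `nvmf_get_subsystems` JSON shown). The sketch below replays those calls via `scripts/rpc.py`; the script path and the `echo`-prefixed default (so the sketch prints rather than requires a live `nvmf_tgt` on `/var/tmp/spdk.sock`) are assumptions, while the arguments are copied from the `rpc_cmd` lines in this log:

```shell
# Replay of the rpc_cmd calls from this run. By default RPC is prefixed
# with 'echo' so the commands are only printed; set RPC=scripts/rpc.py
# (with a running nvmf_tgt) to actually issue them.
RPC=${RPC:-"echo scripts/rpc.py"}

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems       # dumps the two-subsystem JSON seen above
```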
00:28:05.684 [2024-07-23 10:48:54.176794] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901372 ] 00:28:05.945 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.945 [2024-07-23 10:48:54.217307] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:05.945 [2024-07-23 10:48:54.217371] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:05.945 [2024-07-23 10:48:54.217381] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:05.945 [2024-07-23 10:48:54.217397] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:05.945 [2024-07-23 10:48:54.217412] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:05.945 [2024-07-23 10:48:54.217596] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:05.945 [2024-07-23 10:48:54.217647] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbb3030 0 00:28:05.945 [2024-07-23 10:48:54.228498] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:05.945 [2024-07-23 10:48:54.228519] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:05.945 [2024-07-23 10:48:54.228528] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:05.945 [2024-07-23 10:48:54.228535] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:05.945 [2024-07-23 10:48:54.228583] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.228595] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:05.945 [2024-07-23 10:48:54.228604] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb3030) 00:28:05.945 [2024-07-23 10:48:54.228623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:05.945 [2024-07-23 10:48:54.228650] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c100, cid 0, qid 0 00:28:05.945 [2024-07-23 10:48:54.236506] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.945 [2024-07-23 10:48:54.236524] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.945 [2024-07-23 10:48:54.236532] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.236541] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c100) on tqpair=0xbb3030 00:28:05.945 [2024-07-23 10:48:54.236557] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:05.945 [2024-07-23 10:48:54.236568] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:05.945 [2024-07-23 10:48:54.236579] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:05.945 [2024-07-23 10:48:54.236604] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.236614] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.236621] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb3030) 00:28:05.945 [2024-07-23 10:48:54.236634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.945 [2024-07-23 10:48:54.236658] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xc0c100, cid 0, qid 0 00:28:05.945 [2024-07-23 10:48:54.236755] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.945 [2024-07-23 10:48:54.236768] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.945 [2024-07-23 10:48:54.236775] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.236783] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c100) on tqpair=0xbb3030 00:28:05.945 [2024-07-23 10:48:54.236797] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:05.945 [2024-07-23 10:48:54.236812] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:05.945 [2024-07-23 10:48:54.236825] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.236833] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.236840] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb3030) 00:28:05.945 [2024-07-23 10:48:54.236852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.945 [2024-07-23 10:48:54.236874] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c100, cid 0, qid 0 00:28:05.945 [2024-07-23 10:48:54.236969] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.945 [2024-07-23 10:48:54.236988] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.945 [2024-07-23 10:48:54.236996] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.237004] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c100) on tqpair=0xbb3030 00:28:05.945 [2024-07-23 10:48:54.237014] 
nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:05.945 [2024-07-23 10:48:54.237030] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:05.945 [2024-07-23 10:48:54.237043] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.237051] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.237058] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb3030) 00:28:05.945 [2024-07-23 10:48:54.237070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.945 [2024-07-23 10:48:54.237092] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c100, cid 0, qid 0 00:28:05.945 [2024-07-23 10:48:54.237177] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.945 [2024-07-23 10:48:54.237192] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.945 [2024-07-23 10:48:54.237199] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.237206] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c100) on tqpair=0xbb3030 00:28:05.945 [2024-07-23 10:48:54.237216] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:05.945 [2024-07-23 10:48:54.237234] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.237244] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.237251] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb3030) 00:28:05.945 
[2024-07-23 10:48:54.237262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.945 [2024-07-23 10:48:54.237285] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c100, cid 0, qid 0 00:28:05.945 [2024-07-23 10:48:54.237369] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.945 [2024-07-23 10:48:54.237382] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.945 [2024-07-23 10:48:54.237389] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.237397] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c100) on tqpair=0xbb3030 00:28:05.945 [2024-07-23 10:48:54.237406] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:05.945 [2024-07-23 10:48:54.237415] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:05.945 [2024-07-23 10:48:54.237429] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:05.945 [2024-07-23 10:48:54.237541] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:05.945 [2024-07-23 10:48:54.237552] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:05.945 [2024-07-23 10:48:54.237566] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.237575] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.237582] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0xbb3030) 00:28:05.945 [2024-07-23 10:48:54.237594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.945 [2024-07-23 10:48:54.237621] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c100, cid 0, qid 0 00:28:05.945 [2024-07-23 10:48:54.237958] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.945 [2024-07-23 10:48:54.237971] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.945 [2024-07-23 10:48:54.237978] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.237986] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c100) on tqpair=0xbb3030 00:28:05.945 [2024-07-23 10:48:54.237995] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:05.945 [2024-07-23 10:48:54.238012] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.238021] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.945 [2024-07-23 10:48:54.238028] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb3030) 00:28:05.945 [2024-07-23 10:48:54.238040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.945 [2024-07-23 10:48:54.238061] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c100, cid 0, qid 0 00:28:05.945 [2024-07-23 10:48:54.238145] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.945 [2024-07-23 10:48:54.238157] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.945 [2024-07-23 10:48:54.238165] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.945 [2024-07-23 
10:48:54.238172] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c100) on tqpair=0xbb3030 00:28:05.945 [2024-07-23 10:48:54.238181] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:05.945 [2024-07-23 10:48:54.238191] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:05.945 [2024-07-23 10:48:54.238206] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:05.945 [2024-07-23 10:48:54.238221] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:05.945 [2024-07-23 10:48:54.238240] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.238249] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb3030) 00:28:05.946 [2024-07-23 10:48:54.238261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.946 [2024-07-23 10:48:54.238282] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c100, cid 0, qid 0 00:28:05.946 [2024-07-23 10:48:54.238416] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.946 [2024-07-23 10:48:54.238429] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.946 [2024-07-23 10:48:54.238437] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.238444] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbb3030): datao=0, datal=4096, cccid=0 00:28:05.946 [2024-07-23 10:48:54.238453] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0c100) on tqpair(0xbb3030): expected_datao=0, payload_size=4096 00:28:05.946 [2024-07-23 10:48:54.238462] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.238487] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.238499] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.279546] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.946 [2024-07-23 10:48:54.279566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.946 [2024-07-23 10:48:54.279580] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.279588] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c100) on tqpair=0xbb3030 00:28:05.946 [2024-07-23 10:48:54.279607] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:05.946 [2024-07-23 10:48:54.279620] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:05.946 [2024-07-23 10:48:54.279629] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:05.946 [2024-07-23 10:48:54.279639] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:05.946 [2024-07-23 10:48:54.279648] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:05.946 [2024-07-23 10:48:54.279657] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:05.946 [2024-07-23 10:48:54.279674] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
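The admin-queue bring-up traced in the debug lines above (FABRIC CONNECT, PROPERTY GETs of VS/CAP/CC/CSTS, setting CC.EN = 1, waiting for CSTS.RDY = 1, then IDENTIFY controller and AER/keep-alive configuration) is the standard NVMe-over-Fabrics controller initialization that `spdk_nvme_identify` performs in userspace. As a point of comparison only — this is not part of the test and uses nvme-cli instead of the SPDK tool — the same discovery/identify exchange against the listener this run created would look roughly as follows; `show` only prints each command, since actually running them needs root, the `nvme-tcp` module, and the live 10.0.0.2:4420 target:

```shell
# Kernel-initiator (nvme-cli) equivalent of the discovery + identify flow,
# printed rather than executed. Drop the 'show' wrapper to run for real.
show() { echo "+ $*"; }

show modprobe nvme-tcp
show nvme discover -t tcp -a 10.0.0.2 -s 4420     # query discovery subsystem
show nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
show nvme id-ctrl /dev/nvme0    # controller node name depends on enumeration
show nvme disconnect -n nqn.2016-06.io.spdk:cnode1
```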
[nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:05.946 [2024-07-23 10:48:54.279688] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.279697] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.279704] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb3030) 00:28:05.946 [2024-07-23 10:48:54.279718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:05.946 [2024-07-23 10:48:54.279742] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c100, cid 0, qid 0 00:28:05.946 [2024-07-23 10:48:54.279831] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.946 [2024-07-23 10:48:54.279843] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.946 [2024-07-23 10:48:54.279851] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.279860] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c100) on tqpair=0xbb3030 00:28:05.946 [2024-07-23 10:48:54.279873] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.279882] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.279889] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb3030) 00:28:05.946 [2024-07-23 10:48:54.279900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.946 [2024-07-23 10:48:54.279911] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.279919] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.946 [2024-07-23 
10:48:54.279926] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbb3030) 00:28:05.946 [2024-07-23 10:48:54.279936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.946 [2024-07-23 10:48:54.279946] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.279954] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.279961] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbb3030) 00:28:05.946 [2024-07-23 10:48:54.279971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.946 [2024-07-23 10:48:54.279982] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.279990] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.279997] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb3030) 00:28:05.946 [2024-07-23 10:48:54.280007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.946 [2024-07-23 10:48:54.280021] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:05.946 [2024-07-23 10:48:54.280041] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:05.946 [2024-07-23 10:48:54.280054] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.280062] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbb3030) 00:28:05.946 [2024-07-23 
10:48:54.280074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.946 [2024-07-23 10:48:54.280098] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c100, cid 0, qid 0 00:28:05.946 [2024-07-23 10:48:54.280110] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c260, cid 1, qid 0 00:28:05.946 [2024-07-23 10:48:54.280118] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c3c0, cid 2, qid 0 00:28:05.946 [2024-07-23 10:48:54.280127] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c520, cid 3, qid 0 00:28:05.946 [2024-07-23 10:48:54.280136] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c680, cid 4, qid 0 00:28:05.946 [2024-07-23 10:48:54.280257] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.946 [2024-07-23 10:48:54.280270] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.946 [2024-07-23 10:48:54.280277] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.280284] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c680) on tqpair=0xbb3030 00:28:05.946 [2024-07-23 10:48:54.280294] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:05.946 [2024-07-23 10:48:54.280304] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:05.946 [2024-07-23 10:48:54.280323] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.280332] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbb3030) 00:28:05.946 [2024-07-23 10:48:54.280343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.946 [2024-07-23 10:48:54.280365] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c680, cid 4, qid 0 00:28:05.946 [2024-07-23 10:48:54.280463] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.946 [2024-07-23 10:48:54.280478] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.946 [2024-07-23 10:48:54.284501] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.284510] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbb3030): datao=0, datal=4096, cccid=4 00:28:05.946 [2024-07-23 10:48:54.284519] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0c680) on tqpair(0xbb3030): expected_datao=0, payload_size=4096 00:28:05.946 [2024-07-23 10:48:54.284527] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.284547] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.284557] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.284570] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.946 [2024-07-23 10:48:54.284581] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.946 [2024-07-23 10:48:54.284588] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.284596] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c680) on tqpair=0xbb3030 00:28:05.946 [2024-07-23 10:48:54.284617] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:05.946 [2024-07-23 10:48:54.284658] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.284670] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbb3030) 00:28:05.946 [2024-07-23 10:48:54.284682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.946 [2024-07-23 10:48:54.284695] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.284703] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.284710] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbb3030) 00:28:05.946 [2024-07-23 10:48:54.284720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.946 [2024-07-23 10:48:54.284749] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c680, cid 4, qid 0 00:28:05.946 [2024-07-23 10:48:54.284761] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c7e0, cid 5, qid 0 00:28:05.946 [2024-07-23 10:48:54.284897] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.946 [2024-07-23 10:48:54.284910] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.946 [2024-07-23 10:48:54.284917] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.284924] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbb3030): datao=0, datal=1024, cccid=4 00:28:05.946 [2024-07-23 10:48:54.284933] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0c680) on tqpair(0xbb3030): expected_datao=0, payload_size=1024 00:28:05.946 [2024-07-23 10:48:54.284941] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.284952] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.284960] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.946 [2024-07-23 10:48:54.284970] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.946 [2024-07-23 10:48:54.284980] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.947 [2024-07-23 10:48:54.284987] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.284995] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c7e0) on tqpair=0xbb3030 00:28:05.947 [2024-07-23 10:48:54.325557] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.947 [2024-07-23 10:48:54.325576] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.947 [2024-07-23 10:48:54.325584] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.325592] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c680) on tqpair=0xbb3030 00:28:05.947 [2024-07-23 10:48:54.325617] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.325627] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbb3030) 00:28:05.947 [2024-07-23 10:48:54.325640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.947 [2024-07-23 10:48:54.325671] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c680, cid 4, qid 0 00:28:05.947 [2024-07-23 10:48:54.325787] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.947 [2024-07-23 10:48:54.325803] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.947 [2024-07-23 10:48:54.325810] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.325817] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0xbb3030): datao=0, datal=3072, cccid=4 00:28:05.947 [2024-07-23 10:48:54.325826] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0c680) on tqpair(0xbb3030): expected_datao=0, payload_size=3072 00:28:05.947 [2024-07-23 10:48:54.325835] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.325846] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.325862] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.325876] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.947 [2024-07-23 10:48:54.325887] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.947 [2024-07-23 10:48:54.325894] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.325902] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c680) on tqpair=0xbb3030 00:28:05.947 [2024-07-23 10:48:54.325919] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.325928] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbb3030) 00:28:05.947 [2024-07-23 10:48:54.325939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.947 [2024-07-23 10:48:54.325969] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c680, cid 4, qid 0 00:28:05.947 [2024-07-23 10:48:54.326078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.947 [2024-07-23 10:48:54.326092] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.947 [2024-07-23 10:48:54.326100] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.326107] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0xbb3030): datao=0, datal=8, cccid=4 00:28:05.947 [2024-07-23 10:48:54.326115] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0c680) on tqpair(0xbb3030): expected_datao=0, payload_size=8 00:28:05.947 [2024-07-23 10:48:54.326124] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.326135] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.326143] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.367558] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.947 [2024-07-23 10:48:54.367578] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.947 [2024-07-23 10:48:54.367586] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.947 [2024-07-23 10:48:54.367594] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c680) on tqpair=0xbb3030 00:28:05.947 ===================================================== 00:28:05.947 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:05.947 ===================================================== 00:28:05.947 Controller Capabilities/Features 00:28:05.947 ================================ 00:28:05.947 Vendor ID: 0000 00:28:05.947 Subsystem Vendor ID: 0000 00:28:05.947 Serial Number: .................... 00:28:05.947 Model Number: ........................................ 
00:28:05.947 Firmware Version: 24.05.1 00:28:05.947 Recommended Arb Burst: 0 00:28:05.947 IEEE OUI Identifier: 00 00 00 00:28:05.947 Multi-path I/O 00:28:05.947 May have multiple subsystem ports: No 00:28:05.947 May have multiple controllers: No 00:28:05.947 Associated with SR-IOV VF: No 00:28:05.947 Max Data Transfer Size: 131072 00:28:05.947 Max Number of Namespaces: 0 00:28:05.947 Max Number of I/O Queues: 1024 00:28:05.947 NVMe Specification Version (VS): 1.3 00:28:05.947 NVMe Specification Version (Identify): 1.3 00:28:05.947 Maximum Queue Entries: 128 00:28:05.947 Contiguous Queues Required: Yes 00:28:05.947 Arbitration Mechanisms Supported 00:28:05.947 Weighted Round Robin: Not Supported 00:28:05.947 Vendor Specific: Not Supported 00:28:05.947 Reset Timeout: 15000 ms 00:28:05.947 Doorbell Stride: 4 bytes 00:28:05.947 NVM Subsystem Reset: Not Supported 00:28:05.947 Command Sets Supported 00:28:05.947 NVM Command Set: Supported 00:28:05.947 Boot Partition: Not Supported 00:28:05.947 Memory Page Size Minimum: 4096 bytes 00:28:05.947 Memory Page Size Maximum: 4096 bytes 00:28:05.947 Persistent Memory Region: Not Supported 00:28:05.947 Optional Asynchronous Events Supported 00:28:05.947 Namespace Attribute Notices: Not Supported 00:28:05.947 Firmware Activation Notices: Not Supported 00:28:05.947 ANA Change Notices: Not Supported 00:28:05.947 PLE Aggregate Log Change Notices: Not Supported 00:28:05.947 LBA Status Info Alert Notices: Not Supported 00:28:05.947 EGE Aggregate Log Change Notices: Not Supported 00:28:05.947 Normal NVM Subsystem Shutdown event: Not Supported 00:28:05.947 Zone Descriptor Change Notices: Not Supported 00:28:05.947 Discovery Log Change Notices: Supported 00:28:05.947 Controller Attributes 00:28:05.947 128-bit Host Identifier: Not Supported 00:28:05.947 Non-Operational Permissive Mode: Not Supported 00:28:05.947 NVM Sets: Not Supported 00:28:05.947 Read Recovery Levels: Not Supported 00:28:05.947 Endurance Groups: Not Supported 
00:28:05.947 Predictable Latency Mode: Not Supported 00:28:05.947 Traffic Based Keep ALive: Not Supported 00:28:05.947 Namespace Granularity: Not Supported 00:28:05.947 SQ Associations: Not Supported 00:28:05.947 UUID List: Not Supported 00:28:05.947 Multi-Domain Subsystem: Not Supported 00:28:05.947 Fixed Capacity Management: Not Supported 00:28:05.947 Variable Capacity Management: Not Supported 00:28:05.947 Delete Endurance Group: Not Supported 00:28:05.947 Delete NVM Set: Not Supported 00:28:05.947 Extended LBA Formats Supported: Not Supported 00:28:05.947 Flexible Data Placement Supported: Not Supported 00:28:05.947 00:28:05.947 Controller Memory Buffer Support 00:28:05.947 ================================ 00:28:05.947 Supported: No 00:28:05.947 00:28:05.947 Persistent Memory Region Support 00:28:05.947 ================================ 00:28:05.947 Supported: No 00:28:05.947 00:28:05.947 Admin Command Set Attributes 00:28:05.947 ============================ 00:28:05.947 Security Send/Receive: Not Supported 00:28:05.947 Format NVM: Not Supported 00:28:05.947 Firmware Activate/Download: Not Supported 00:28:05.947 Namespace Management: Not Supported 00:28:05.947 Device Self-Test: Not Supported 00:28:05.947 Directives: Not Supported 00:28:05.947 NVMe-MI: Not Supported 00:28:05.947 Virtualization Management: Not Supported 00:28:05.947 Doorbell Buffer Config: Not Supported 00:28:05.947 Get LBA Status Capability: Not Supported 00:28:05.947 Command & Feature Lockdown Capability: Not Supported 00:28:05.947 Abort Command Limit: 1 00:28:05.947 Async Event Request Limit: 4 00:28:05.947 Number of Firmware Slots: N/A 00:28:05.947 Firmware Slot 1 Read-Only: N/A 00:28:05.947 Firmware Activation Without Reset: N/A 00:28:05.947 Multiple Update Detection Support: N/A 00:28:05.947 Firmware Update Granularity: No Information Provided 00:28:05.947 Per-Namespace SMART Log: No 00:28:05.947 Asymmetric Namespace Access Log Page: Not Supported 00:28:05.947 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:28:05.947 Command Effects Log Page: Not Supported 00:28:05.947 Get Log Page Extended Data: Supported 00:28:05.947 Telemetry Log Pages: Not Supported 00:28:05.947 Persistent Event Log Pages: Not Supported 00:28:05.947 Supported Log Pages Log Page: May Support 00:28:05.947 Commands Supported & Effects Log Page: Not Supported 00:28:05.947 Feature Identifiers & Effects Log Page:May Support 00:28:05.947 NVMe-MI Commands & Effects Log Page: May Support 00:28:05.947 Data Area 4 for Telemetry Log: Not Supported 00:28:05.947 Error Log Page Entries Supported: 128 00:28:05.947 Keep Alive: Not Supported 00:28:05.947 00:28:05.947 NVM Command Set Attributes 00:28:05.947 ========================== 00:28:05.947 Submission Queue Entry Size 00:28:05.947 Max: 1 00:28:05.947 Min: 1 00:28:05.947 Completion Queue Entry Size 00:28:05.947 Max: 1 00:28:05.947 Min: 1 00:28:05.947 Number of Namespaces: 0 00:28:05.947 Compare Command: Not Supported 00:28:05.947 Write Uncorrectable Command: Not Supported 00:28:05.947 Dataset Management Command: Not Supported 00:28:05.947 Write Zeroes Command: Not Supported 00:28:05.947 Set Features Save Field: Not Supported 00:28:05.947 Reservations: Not Supported 00:28:05.947 Timestamp: Not Supported 00:28:05.948 Copy: Not Supported 00:28:05.948 Volatile Write Cache: Not Present 00:28:05.948 Atomic Write Unit (Normal): 1 00:28:05.948 Atomic Write Unit (PFail): 1 00:28:05.948 Atomic Compare & Write Unit: 1 00:28:05.948 Fused Compare & Write: Supported 00:28:05.948 Scatter-Gather List 00:28:05.948 SGL Command Set: Supported 00:28:05.948 SGL Keyed: Supported 00:28:05.948 SGL Bit Bucket Descriptor: Not Supported 00:28:05.948 SGL Metadata Pointer: Not Supported 00:28:05.948 Oversized SGL: Not Supported 00:28:05.948 SGL Metadata Address: Not Supported 00:28:05.948 SGL Offset: Supported 00:28:05.948 Transport SGL Data Block: Not Supported 00:28:05.948 Replay Protected Memory Block: Not Supported 00:28:05.948 00:28:05.948 
Firmware Slot Information 00:28:05.948 ========================= 00:28:05.948 Active slot: 0 00:28:05.948 00:28:05.948 00:28:05.948 Error Log 00:28:05.948 ========= 00:28:05.948 00:28:05.948 Active Namespaces 00:28:05.948 ================= 00:28:05.948 Discovery Log Page 00:28:05.948 ================== 00:28:05.948 Generation Counter: 2 00:28:05.948 Number of Records: 2 00:28:05.948 Record Format: 0 00:28:05.948 00:28:05.948 Discovery Log Entry 0 00:28:05.948 ---------------------- 00:28:05.948 Transport Type: 3 (TCP) 00:28:05.948 Address Family: 1 (IPv4) 00:28:05.948 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:05.948 Entry Flags: 00:28:05.948 Duplicate Returned Information: 1 00:28:05.948 Explicit Persistent Connection Support for Discovery: 1 00:28:05.948 Transport Requirements: 00:28:05.948 Secure Channel: Not Required 00:28:05.948 Port ID: 0 (0x0000) 00:28:05.948 Controller ID: 65535 (0xffff) 00:28:05.948 Admin Max SQ Size: 128 00:28:05.948 Transport Service Identifier: 4420 00:28:05.948 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:05.948 Transport Address: 10.0.0.2 00:28:05.948 Discovery Log Entry 1 00:28:05.948 ---------------------- 00:28:05.948 Transport Type: 3 (TCP) 00:28:05.948 Address Family: 1 (IPv4) 00:28:05.948 Subsystem Type: 2 (NVM Subsystem) 00:28:05.948 Entry Flags: 00:28:05.948 Duplicate Returned Information: 0 00:28:05.948 Explicit Persistent Connection Support for Discovery: 0 00:28:05.948 Transport Requirements: 00:28:05.948 Secure Channel: Not Required 00:28:05.948 Port ID: 0 (0x0000) 00:28:05.948 Controller ID: 65535 (0xffff) 00:28:05.948 Admin Max SQ Size: 128 00:28:05.948 Transport Service Identifier: 4420 00:28:05.948 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:05.948 Transport Address: 10.0.0.2 [2024-07-23 10:48:54.367716] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:05.948 [2024-07-23 10:48:54.367742] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.948 [2024-07-23 10:48:54.367755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.948 [2024-07-23 10:48:54.367766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.948 [2024-07-23 10:48:54.367777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.948 [2024-07-23 10:48:54.367797] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.367807] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.367815] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb3030) 00:28:05.948 [2024-07-23 10:48:54.367827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.948 [2024-07-23 10:48:54.367853] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c520, cid 3, qid 0 00:28:05.948 [2024-07-23 10:48:54.367932] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.948 [2024-07-23 10:48:54.367947] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.948 [2024-07-23 10:48:54.367954] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.367962] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c520) on tqpair=0xbb3030 00:28:05.948 [2024-07-23 10:48:54.367980] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.367989] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.948 [2024-07-23 
10:48:54.367997] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb3030) 00:28:05.948 [2024-07-23 10:48:54.368008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.948 [2024-07-23 10:48:54.368036] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c520, cid 3, qid 0 00:28:05.948 [2024-07-23 10:48:54.368140] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.948 [2024-07-23 10:48:54.368153] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.948 [2024-07-23 10:48:54.368160] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.368167] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c520) on tqpair=0xbb3030 00:28:05.948 [2024-07-23 10:48:54.368178] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:05.948 [2024-07-23 10:48:54.368187] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:05.948 [2024-07-23 10:48:54.368204] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.368213] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.368220] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb3030) 00:28:05.948 [2024-07-23 10:48:54.368231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.948 [2024-07-23 10:48:54.368253] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c520, cid 3, qid 0 00:28:05.948 [2024-07-23 10:48:54.368347] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.948 [2024-07-23 
10:48:54.368359] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.948 [2024-07-23 10:48:54.368367] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.368374] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c520) on tqpair=0xbb3030 00:28:05.948 [2024-07-23 10:48:54.368393] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.368402] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.368410] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb3030) 00:28:05.948 [2024-07-23 10:48:54.368421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.948 [2024-07-23 10:48:54.368442] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c520, cid 3, qid 0 00:28:05.948 [2024-07-23 10:48:54.372510] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.948 [2024-07-23 10:48:54.372528] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.948 [2024-07-23 10:48:54.372536] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.372544] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c520) on tqpair=0xbb3030 00:28:05.948 [2024-07-23 10:48:54.372563] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.372573] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.372581] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb3030) 00:28:05.948 [2024-07-23 10:48:54.372593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.948 [2024-07-23 
10:48:54.372616] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0c520, cid 3, qid 0 00:28:05.948 [2024-07-23 10:48:54.372710] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.948 [2024-07-23 10:48:54.372723] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.948 [2024-07-23 10:48:54.372735] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.948 [2024-07-23 10:48:54.372743] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc0c520) on tqpair=0xbb3030 00:28:05.948 [2024-07-23 10:48:54.372757] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:28:05.948 00:28:05.948 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:05.948 [2024-07-23 10:48:54.406392] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:28:05.948 [2024-07-23 10:48:54.406446] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901457 ] 00:28:05.948 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.211 [2024-07-23 10:48:54.448904] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:06.211 [2024-07-23 10:48:54.448962] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:06.211 [2024-07-23 10:48:54.448973] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:06.211 [2024-07-23 10:48:54.448988] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:06.211 [2024-07-23 10:48:54.449002] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:06.211 [2024-07-23 10:48:54.449150] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:06.211 [2024-07-23 10:48:54.449192] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10d0030 0 00:28:06.211 [2024-07-23 10:48:54.462494] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:06.211 [2024-07-23 10:48:54.462516] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:06.211 [2024-07-23 10:48:54.462524] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:06.211 [2024-07-23 10:48:54.462531] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:06.211 [2024-07-23 10:48:54.462571] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.462583] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.211 [2024-07-23 
10:48:54.462591] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0030) 00:28:06.211 [2024-07-23 10:48:54.462607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:06.211 [2024-07-23 10:48:54.462634] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129100, cid 0, qid 0 00:28:06.211 [2024-07-23 10:48:54.469496] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.211 [2024-07-23 10:48:54.469515] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.211 [2024-07-23 10:48:54.469523] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.469531] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129100) on tqpair=0x10d0030 00:28:06.211 [2024-07-23 10:48:54.469553] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:06.211 [2024-07-23 10:48:54.469565] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:06.211 [2024-07-23 10:48:54.469575] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:06.211 [2024-07-23 10:48:54.469598] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.469607] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.469619] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0030) 00:28:06.211 [2024-07-23 10:48:54.469633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.211 [2024-07-23 10:48:54.469658] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129100, cid 0, qid 0 00:28:06.211 
[2024-07-23 10:48:54.469753] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.211 [2024-07-23 10:48:54.469768] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.211 [2024-07-23 10:48:54.469776] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.469783] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129100) on tqpair=0x10d0030 00:28:06.211 [2024-07-23 10:48:54.469798] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:06.211 [2024-07-23 10:48:54.469814] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:06.211 [2024-07-23 10:48:54.469827] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.469836] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.469843] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0030) 00:28:06.211 [2024-07-23 10:48:54.469855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.211 [2024-07-23 10:48:54.469877] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129100, cid 0, qid 0 00:28:06.211 [2024-07-23 10:48:54.469964] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.211 [2024-07-23 10:48:54.469979] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.211 [2024-07-23 10:48:54.469987] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.469994] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129100) on tqpair=0x10d0030 00:28:06.211 [2024-07-23 10:48:54.470005] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:06.211 [2024-07-23 10:48:54.470020] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:06.211 [2024-07-23 10:48:54.470033] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.470041] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.470049] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0030) 00:28:06.211 [2024-07-23 10:48:54.470060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.211 [2024-07-23 10:48:54.470082] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129100, cid 0, qid 0 00:28:06.211 [2024-07-23 10:48:54.470170] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.211 [2024-07-23 10:48:54.470185] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.211 [2024-07-23 10:48:54.470192] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.470200] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129100) on tqpair=0x10d0030 00:28:06.211 [2024-07-23 10:48:54.470211] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:06.211 [2024-07-23 10:48:54.470229] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.470238] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.470246] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0030) 00:28:06.211 [2024-07-23 10:48:54.470257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.211 [2024-07-23 10:48:54.470286] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129100, cid 0, qid 0 00:28:06.211 [2024-07-23 10:48:54.470374] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.211 [2024-07-23 10:48:54.470388] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.211 [2024-07-23 10:48:54.470395] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.470403] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129100) on tqpair=0x10d0030 00:28:06.211 [2024-07-23 10:48:54.470413] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:06.211 [2024-07-23 10:48:54.470423] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:06.211 [2024-07-23 10:48:54.470437] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:06.211 [2024-07-23 10:48:54.470550] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:06.211 [2024-07-23 10:48:54.470560] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:06.211 [2024-07-23 10:48:54.470573] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.470582] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.470589] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0030) 00:28:06.211 [2024-07-23 10:48:54.470601] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.211 [2024-07-23 10:48:54.470624] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129100, cid 0, qid 0 00:28:06.211 [2024-07-23 10:48:54.470712] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.211 [2024-07-23 10:48:54.470726] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.211 [2024-07-23 10:48:54.470734] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.470741] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129100) on tqpair=0x10d0030 00:28:06.211 [2024-07-23 10:48:54.470752] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:06.211 [2024-07-23 10:48:54.470770] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.470779] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.470786] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0030) 00:28:06.211 [2024-07-23 10:48:54.470798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.211 [2024-07-23 10:48:54.470820] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129100, cid 0, qid 0 00:28:06.211 [2024-07-23 10:48:54.470915] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.211 [2024-07-23 10:48:54.470927] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.211 [2024-07-23 10:48:54.470935] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.470942] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129100) on 
tqpair=0x10d0030 00:28:06.211 [2024-07-23 10:48:54.470952] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:06.211 [2024-07-23 10:48:54.470962] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:06.211 [2024-07-23 10:48:54.470976] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:06.211 [2024-07-23 10:48:54.470993] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:06.211 [2024-07-23 10:48:54.471013] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.211 [2024-07-23 10:48:54.471022] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0030) 00:28:06.211 [2024-07-23 10:48:54.471034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.211 [2024-07-23 10:48:54.471056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129100, cid 0, qid 0 00:28:06.212 [2024-07-23 10:48:54.471188] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.212 [2024-07-23 10:48:54.471203] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.212 [2024-07-23 10:48:54.471211] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.471218] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0030): datao=0, datal=4096, cccid=0 00:28:06.212 [2024-07-23 10:48:54.471227] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1129100) on tqpair(0x10d0030): expected_datao=0, payload_size=4096 00:28:06.212 
[2024-07-23 10:48:54.471236] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.471254] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.471264] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.511560] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.212 [2024-07-23 10:48:54.511579] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.212 [2024-07-23 10:48:54.511588] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.511596] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129100) on tqpair=0x10d0030 00:28:06.212 [2024-07-23 10:48:54.511615] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:06.212 [2024-07-23 10:48:54.511626] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:06.212 [2024-07-23 10:48:54.511634] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:06.212 [2024-07-23 10:48:54.511642] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:06.212 [2024-07-23 10:48:54.511651] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:06.212 [2024-07-23 10:48:54.511660] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:06.212 [2024-07-23 10:48:54.511676] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:06.212 [2024-07-23 10:48:54.511690] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.212 [2024-07-23 
10:48:54.511698] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.511706] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0030) 00:28:06.212 [2024-07-23 10:48:54.511719] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:06.212 [2024-07-23 10:48:54.511742] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129100, cid 0, qid 0 00:28:06.212 [2024-07-23 10:48:54.511832] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.212 [2024-07-23 10:48:54.511845] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.212 [2024-07-23 10:48:54.511853] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.511861] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129100) on tqpair=0x10d0030 00:28:06.212 [2024-07-23 10:48:54.511874] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.511886] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.511894] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0030) 00:28:06.212 [2024-07-23 10:48:54.511905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.212 [2024-07-23 10:48:54.511917] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.511924] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.511931] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10d0030) 00:28:06.212 [2024-07-23 10:48:54.511941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.212 [2024-07-23 10:48:54.511952] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.511959] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.511966] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10d0030) 00:28:06.212 [2024-07-23 10:48:54.511976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.212 [2024-07-23 10:48:54.511987] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.511994] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.512001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0030) 00:28:06.212 [2024-07-23 10:48:54.512011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.212 [2024-07-23 10:48:54.512020] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:06.212 [2024-07-23 10:48:54.512040] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:06.212 [2024-07-23 10:48:54.512054] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.512061] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0030) 00:28:06.212 [2024-07-23 10:48:54.512073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.212 [2024-07-23 10:48:54.512096] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129100, cid 0, qid 0 00:28:06.212 [2024-07-23 10:48:54.512108] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129260, cid 1, qid 0 00:28:06.212 [2024-07-23 10:48:54.512116] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11293c0, cid 2, qid 0 00:28:06.212 [2024-07-23 10:48:54.512125] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129520, cid 3, qid 0 00:28:06.212 [2024-07-23 10:48:54.512134] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129680, cid 4, qid 0 00:28:06.212 [2024-07-23 10:48:54.512257] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.212 [2024-07-23 10:48:54.512272] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.212 [2024-07-23 10:48:54.512279] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.512287] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129680) on tqpair=0x10d0030 00:28:06.212 [2024-07-23 10:48:54.512297] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:06.212 [2024-07-23 10:48:54.512307] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:06.212 [2024-07-23 10:48:54.512322] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:06.212 [2024-07-23 10:48:54.512334] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:06.212 [2024-07-23 10:48:54.512350] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.512359] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.512366] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0030) 00:28:06.212 [2024-07-23 10:48:54.512378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:06.212 [2024-07-23 10:48:54.512399] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129680, cid 4, qid 0 00:28:06.212 [2024-07-23 10:48:54.512492] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.212 [2024-07-23 10:48:54.512507] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.212 [2024-07-23 10:48:54.512514] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.512522] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129680) on tqpair=0x10d0030 00:28:06.212 [2024-07-23 10:48:54.512599] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:06.212 [2024-07-23 10:48:54.512620] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:06.212 [2024-07-23 10:48:54.512636] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.512644] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0030) 00:28:06.212 [2024-07-23 10:48:54.512662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.212 [2024-07-23 10:48:54.512684] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129680, cid 4, qid 0 00:28:06.212 [2024-07-23 10:48:54.512790] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.212 [2024-07-23 10:48:54.512806] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.212 [2024-07-23 10:48:54.512813] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.512820] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0030): datao=0, datal=4096, cccid=4 00:28:06.212 [2024-07-23 10:48:54.512829] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1129680) on tqpair(0x10d0030): expected_datao=0, payload_size=4096 00:28:06.212 [2024-07-23 10:48:54.512838] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.512850] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.512858] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.512871] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.212 [2024-07-23 10:48:54.512882] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.212 [2024-07-23 10:48:54.512889] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.212 [2024-07-23 10:48:54.512896] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129680) on tqpair=0x10d0030 00:28:06.212 [2024-07-23 10:48:54.512913] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:06.212 [2024-07-23 10:48:54.512932] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:06.212 [2024-07-23 10:48:54.512950] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:06.212 [2024-07-23 10:48:54.512964] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.212 
[2024-07-23 10:48:54.512973] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0030) 00:28:06.212 [2024-07-23 10:48:54.512985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.212 [2024-07-23 10:48:54.513010] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129680, cid 4, qid 0 00:28:06.212 [2024-07-23 10:48:54.513124] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.212 [2024-07-23 10:48:54.513139] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.212 [2024-07-23 10:48:54.513146] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.513154] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0030): datao=0, datal=4096, cccid=4 00:28:06.213 [2024-07-23 10:48:54.513162] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1129680) on tqpair(0x10d0030): expected_datao=0, payload_size=4096 00:28:06.213 [2024-07-23 10:48:54.513171] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.513182] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.513190] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.513203] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.213 [2024-07-23 10:48:54.513214] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.213 [2024-07-23 10:48:54.513221] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.513229] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129680) on tqpair=0x10d0030 00:28:06.213 [2024-07-23 10:48:54.513251] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:06.213 [2024-07-23 10:48:54.513271] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:06.213 [2024-07-23 10:48:54.513286] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.513294] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0030) 00:28:06.213 [2024-07-23 10:48:54.513306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.213 [2024-07-23 10:48:54.513328] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129680, cid 4, qid 0 00:28:06.213 [2024-07-23 10:48:54.513432] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.213 [2024-07-23 10:48:54.513445] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.213 [2024-07-23 10:48:54.513452] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.513459] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0030): datao=0, datal=4096, cccid=4 00:28:06.213 [2024-07-23 10:48:54.513468] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1129680) on tqpair(0x10d0030): expected_datao=0, payload_size=4096 00:28:06.213 [2024-07-23 10:48:54.513477] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.517505] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.517515] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.517529] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.213 [2024-07-23 10:48:54.517540] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.213 [2024-07-23 10:48:54.517547] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.517555] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129680) on tqpair=0x10d0030 00:28:06.213 [2024-07-23 10:48:54.517579] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:06.213 [2024-07-23 10:48:54.517596] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:06.213 [2024-07-23 10:48:54.517614] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:06.213 [2024-07-23 10:48:54.517630] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:06.213 [2024-07-23 10:48:54.517647] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:06.213 [2024-07-23 10:48:54.517657] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:06.213 [2024-07-23 10:48:54.517665] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:06.213 [2024-07-23 10:48:54.517675] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:06.213 [2024-07-23 10:48:54.517700] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.517710] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0030) 00:28:06.213 [2024-07-23 
10:48:54.517722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.213 [2024-07-23 10:48:54.517734] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.517742] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.517750] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10d0030) 00:28:06.213 [2024-07-23 10:48:54.517760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.213 [2024-07-23 10:48:54.517787] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129680, cid 4, qid 0 00:28:06.213 [2024-07-23 10:48:54.517799] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11297e0, cid 5, qid 0 00:28:06.213 [2024-07-23 10:48:54.517906] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.213 [2024-07-23 10:48:54.517921] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.213 [2024-07-23 10:48:54.517928] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.517936] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129680) on tqpair=0x10d0030 00:28:06.213 [2024-07-23 10:48:54.517949] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.213 [2024-07-23 10:48:54.517960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.213 [2024-07-23 10:48:54.517967] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.517975] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11297e0) on tqpair=0x10d0030 00:28:06.213 [2024-07-23 10:48:54.517993] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.213 
[2024-07-23 10:48:54.518002] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10d0030) 00:28:06.213 [2024-07-23 10:48:54.518020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.213 [2024-07-23 10:48:54.518041] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11297e0, cid 5, qid 0 00:28:06.213 [2024-07-23 10:48:54.518131] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.213 [2024-07-23 10:48:54.518146] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.213 [2024-07-23 10:48:54.518153] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.518160] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11297e0) on tqpair=0x10d0030 00:28:06.213 [2024-07-23 10:48:54.518179] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.518189] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10d0030) 00:28:06.213 [2024-07-23 10:48:54.518200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.213 [2024-07-23 10:48:54.518225] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11297e0, cid 5, qid 0 00:28:06.213 [2024-07-23 10:48:54.518319] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.213 [2024-07-23 10:48:54.518332] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.213 [2024-07-23 10:48:54.518340] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.518347] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11297e0) on tqpair=0x10d0030 00:28:06.213 [2024-07-23 10:48:54.518366] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.518375] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10d0030) 00:28:06.213 [2024-07-23 10:48:54.518387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.213 [2024-07-23 10:48:54.518407] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11297e0, cid 5, qid 0 00:28:06.213 [2024-07-23 10:48:54.518498] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.213 [2024-07-23 10:48:54.518512] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.213 [2024-07-23 10:48:54.518520] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.518528] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11297e0) on tqpair=0x10d0030 00:28:06.213 [2024-07-23 10:48:54.518549] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.518560] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10d0030) 00:28:06.213 [2024-07-23 10:48:54.518571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.213 [2024-07-23 10:48:54.518584] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.518592] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0030) 00:28:06.213 [2024-07-23 10:48:54.518603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.213 [2024-07-23 10:48:54.518615] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.518623] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x10d0030) 00:28:06.213 [2024-07-23 10:48:54.518634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.213 [2024-07-23 10:48:54.518646] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.518654] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10d0030) 00:28:06.213 [2024-07-23 10:48:54.518665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.213 [2024-07-23 10:48:54.518688] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11297e0, cid 5, qid 0 00:28:06.213 [2024-07-23 10:48:54.518699] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129680, cid 4, qid 0 00:28:06.213 [2024-07-23 10:48:54.518708] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129940, cid 6, qid 0 00:28:06.213 [2024-07-23 10:48:54.518717] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129aa0, cid 7, qid 0 00:28:06.213 [2024-07-23 10:48:54.518888] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.213 [2024-07-23 10:48:54.518904] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.213 [2024-07-23 10:48:54.518912] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.518919] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0030): datao=0, datal=8192, cccid=5 00:28:06.213 [2024-07-23 10:48:54.518928] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11297e0) on 
tqpair(0x10d0030): expected_datao=0, payload_size=8192 00:28:06.213 [2024-07-23 10:48:54.518940] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.213 [2024-07-23 10:48:54.518965] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.518976] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.518986] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.214 [2024-07-23 10:48:54.518996] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.214 [2024-07-23 10:48:54.519003] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519010] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0030): datao=0, datal=512, cccid=4 00:28:06.214 [2024-07-23 10:48:54.519019] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1129680) on tqpair(0x10d0030): expected_datao=0, payload_size=512 00:28:06.214 [2024-07-23 10:48:54.519027] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519038] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519046] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519055] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.214 [2024-07-23 10:48:54.519065] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.214 [2024-07-23 10:48:54.519073] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519080] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0030): datao=0, datal=512, cccid=6 00:28:06.214 [2024-07-23 10:48:54.519088] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1129940) on tqpair(0x10d0030): expected_datao=0, payload_size=512 00:28:06.214 
[2024-07-23 10:48:54.519096] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519106] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519114] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519124] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.214 [2024-07-23 10:48:54.519134] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.214 [2024-07-23 10:48:54.519141] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519148] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0030): datao=0, datal=4096, cccid=7 00:28:06.214 [2024-07-23 10:48:54.519156] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1129aa0) on tqpair(0x10d0030): expected_datao=0, payload_size=4096 00:28:06.214 [2024-07-23 10:48:54.519164] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519175] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519183] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519196] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.214 [2024-07-23 10:48:54.519206] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.214 [2024-07-23 10:48:54.519213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519221] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11297e0) on tqpair=0x10d0030 00:28:06.214 [2024-07-23 10:48:54.519243] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.214 [2024-07-23 10:48:54.519255] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.214 [2024-07-23 10:48:54.519262] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519270] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129680) on tqpair=0x10d0030 00:28:06.214 [2024-07-23 10:48:54.519286] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.214 [2024-07-23 10:48:54.519298] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.214 [2024-07-23 10:48:54.519305] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519315] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129940) on tqpair=0x10d0030 00:28:06.214 [2024-07-23 10:48:54.519332] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.214 [2024-07-23 10:48:54.519344] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.214 [2024-07-23 10:48:54.519351] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.214 [2024-07-23 10:48:54.519358] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129aa0) on tqpair=0x10d0030 00:28:06.214 ===================================================== 00:28:06.214 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.214 ===================================================== 00:28:06.214 Controller Capabilities/Features 00:28:06.214 ================================ 00:28:06.214 Vendor ID: 8086 00:28:06.214 Subsystem Vendor ID: 8086 00:28:06.214 Serial Number: SPDK00000000000001 00:28:06.214 Model Number: SPDK bdev Controller 00:28:06.214 Firmware Version: 24.05.1 00:28:06.214 Recommended Arb Burst: 6 00:28:06.214 IEEE OUI Identifier: e4 d2 5c 00:28:06.214 Multi-path I/O 00:28:06.214 May have multiple subsystem ports: Yes 00:28:06.214 May have multiple controllers: Yes 00:28:06.214 Associated with SR-IOV VF: No 00:28:06.214 Max Data Transfer Size: 131072 00:28:06.214 Max Number of Namespaces: 32 
00:28:06.214 Max Number of I/O Queues: 127 00:28:06.214 NVMe Specification Version (VS): 1.3 00:28:06.214 NVMe Specification Version (Identify): 1.3 00:28:06.214 Maximum Queue Entries: 128 00:28:06.214 Contiguous Queues Required: Yes 00:28:06.214 Arbitration Mechanisms Supported 00:28:06.214 Weighted Round Robin: Not Supported 00:28:06.214 Vendor Specific: Not Supported 00:28:06.214 Reset Timeout: 15000 ms 00:28:06.214 Doorbell Stride: 4 bytes 00:28:06.214 NVM Subsystem Reset: Not Supported 00:28:06.214 Command Sets Supported 00:28:06.214 NVM Command Set: Supported 00:28:06.214 Boot Partition: Not Supported 00:28:06.214 Memory Page Size Minimum: 4096 bytes 00:28:06.214 Memory Page Size Maximum: 4096 bytes 00:28:06.214 Persistent Memory Region: Not Supported 00:28:06.214 Optional Asynchronous Events Supported 00:28:06.214 Namespace Attribute Notices: Supported 00:28:06.214 Firmware Activation Notices: Not Supported 00:28:06.214 ANA Change Notices: Not Supported 00:28:06.214 PLE Aggregate Log Change Notices: Not Supported 00:28:06.214 LBA Status Info Alert Notices: Not Supported 00:28:06.214 EGE Aggregate Log Change Notices: Not Supported 00:28:06.214 Normal NVM Subsystem Shutdown event: Not Supported 00:28:06.214 Zone Descriptor Change Notices: Not Supported 00:28:06.214 Discovery Log Change Notices: Not Supported 00:28:06.214 Controller Attributes 00:28:06.214 128-bit Host Identifier: Supported 00:28:06.214 Non-Operational Permissive Mode: Not Supported 00:28:06.214 NVM Sets: Not Supported 00:28:06.214 Read Recovery Levels: Not Supported 00:28:06.214 Endurance Groups: Not Supported 00:28:06.214 Predictable Latency Mode: Not Supported 00:28:06.214 Traffic Based Keep ALive: Not Supported 00:28:06.214 Namespace Granularity: Not Supported 00:28:06.214 SQ Associations: Not Supported 00:28:06.214 UUID List: Not Supported 00:28:06.214 Multi-Domain Subsystem: Not Supported 00:28:06.214 Fixed Capacity Management: Not Supported 00:28:06.214 Variable Capacity Management: Not 
Supported 00:28:06.214 Delete Endurance Group: Not Supported 00:28:06.214 Delete NVM Set: Not Supported 00:28:06.214 Extended LBA Formats Supported: Not Supported 00:28:06.214 Flexible Data Placement Supported: Not Supported 00:28:06.214 00:28:06.214 Controller Memory Buffer Support 00:28:06.214 ================================ 00:28:06.214 Supported: No 00:28:06.214 00:28:06.214 Persistent Memory Region Support 00:28:06.214 ================================ 00:28:06.214 Supported: No 00:28:06.214 00:28:06.214 Admin Command Set Attributes 00:28:06.214 ============================ 00:28:06.214 Security Send/Receive: Not Supported 00:28:06.214 Format NVM: Not Supported 00:28:06.214 Firmware Activate/Download: Not Supported 00:28:06.214 Namespace Management: Not Supported 00:28:06.214 Device Self-Test: Not Supported 00:28:06.214 Directives: Not Supported 00:28:06.214 NVMe-MI: Not Supported 00:28:06.214 Virtualization Management: Not Supported 00:28:06.214 Doorbell Buffer Config: Not Supported 00:28:06.214 Get LBA Status Capability: Not Supported 00:28:06.214 Command & Feature Lockdown Capability: Not Supported 00:28:06.214 Abort Command Limit: 4 00:28:06.214 Async Event Request Limit: 4 00:28:06.214 Number of Firmware Slots: N/A 00:28:06.214 Firmware Slot 1 Read-Only: N/A 00:28:06.214 Firmware Activation Without Reset: N/A 00:28:06.214 Multiple Update Detection Support: N/A 00:28:06.214 Firmware Update Granularity: No Information Provided 00:28:06.214 Per-Namespace SMART Log: No 00:28:06.214 Asymmetric Namespace Access Log Page: Not Supported 00:28:06.214 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:06.214 Command Effects Log Page: Supported 00:28:06.214 Get Log Page Extended Data: Supported 00:28:06.214 Telemetry Log Pages: Not Supported 00:28:06.214 Persistent Event Log Pages: Not Supported 00:28:06.214 Supported Log Pages Log Page: May Support 00:28:06.214 Commands Supported & Effects Log Page: Not Supported 00:28:06.214 Feature Identifiers & Effects Log Page:May 
Support 00:28:06.214 NVMe-MI Commands & Effects Log Page: May Support 00:28:06.214 Data Area 4 for Telemetry Log: Not Supported 00:28:06.214 Error Log Page Entries Supported: 128 00:28:06.214 Keep Alive: Supported 00:28:06.214 Keep Alive Granularity: 10000 ms 00:28:06.214 00:28:06.214 NVM Command Set Attributes 00:28:06.214 ========================== 00:28:06.214 Submission Queue Entry Size 00:28:06.214 Max: 64 00:28:06.214 Min: 64 00:28:06.214 Completion Queue Entry Size 00:28:06.214 Max: 16 00:28:06.214 Min: 16 00:28:06.214 Number of Namespaces: 32 00:28:06.214 Compare Command: Supported 00:28:06.214 Write Uncorrectable Command: Not Supported 00:28:06.215 Dataset Management Command: Supported 00:28:06.215 Write Zeroes Command: Supported 00:28:06.215 Set Features Save Field: Not Supported 00:28:06.215 Reservations: Supported 00:28:06.215 Timestamp: Not Supported 00:28:06.215 Copy: Supported 00:28:06.215 Volatile Write Cache: Present 00:28:06.215 Atomic Write Unit (Normal): 1 00:28:06.215 Atomic Write Unit (PFail): 1 00:28:06.215 Atomic Compare & Write Unit: 1 00:28:06.215 Fused Compare & Write: Supported 00:28:06.215 Scatter-Gather List 00:28:06.215 SGL Command Set: Supported 00:28:06.215 SGL Keyed: Supported 00:28:06.215 SGL Bit Bucket Descriptor: Not Supported 00:28:06.215 SGL Metadata Pointer: Not Supported 00:28:06.215 Oversized SGL: Not Supported 00:28:06.215 SGL Metadata Address: Not Supported 00:28:06.215 SGL Offset: Supported 00:28:06.215 Transport SGL Data Block: Not Supported 00:28:06.215 Replay Protected Memory Block: Not Supported 00:28:06.215 00:28:06.215 Firmware Slot Information 00:28:06.215 ========================= 00:28:06.215 Active slot: 1 00:28:06.215 Slot 1 Firmware Revision: 24.05.1 00:28:06.215 00:28:06.215 00:28:06.215 Commands Supported and Effects 00:28:06.215 ============================== 00:28:06.215 Admin Commands 00:28:06.215 -------------- 00:28:06.215 Get Log Page (02h): Supported 00:28:06.215 Identify (06h): Supported 
00:28:06.215 Abort (08h): Supported 00:28:06.215 Set Features (09h): Supported 00:28:06.215 Get Features (0Ah): Supported 00:28:06.215 Asynchronous Event Request (0Ch): Supported 00:28:06.215 Keep Alive (18h): Supported 00:28:06.215 I/O Commands 00:28:06.215 ------------ 00:28:06.215 Flush (00h): Supported LBA-Change 00:28:06.215 Write (01h): Supported LBA-Change 00:28:06.215 Read (02h): Supported 00:28:06.215 Compare (05h): Supported 00:28:06.215 Write Zeroes (08h): Supported LBA-Change 00:28:06.215 Dataset Management (09h): Supported LBA-Change 00:28:06.215 Copy (19h): Supported LBA-Change 00:28:06.215 Unknown (79h): Supported LBA-Change 00:28:06.215 Unknown (7Ah): Supported 00:28:06.215 00:28:06.215 Error Log 00:28:06.215 ========= 00:28:06.215 00:28:06.215 Arbitration 00:28:06.215 =========== 00:28:06.215 Arbitration Burst: 1 00:28:06.215 00:28:06.215 Power Management 00:28:06.215 ================ 00:28:06.215 Number of Power States: 1 00:28:06.215 Current Power State: Power State #0 00:28:06.215 Power State #0: 00:28:06.215 Max Power: 0.00 W 00:28:06.215 Non-Operational State: Operational 00:28:06.215 Entry Latency: Not Reported 00:28:06.215 Exit Latency: Not Reported 00:28:06.215 Relative Read Throughput: 0 00:28:06.215 Relative Read Latency: 0 00:28:06.215 Relative Write Throughput: 0 00:28:06.215 Relative Write Latency: 0 00:28:06.215 Idle Power: Not Reported 00:28:06.215 Active Power: Not Reported 00:28:06.215 Non-Operational Permissive Mode: Not Supported 00:28:06.215 00:28:06.215 Health Information 00:28:06.215 ================== 00:28:06.215 Critical Warnings: 00:28:06.215 Available Spare Space: OK 00:28:06.215 Temperature: OK 00:28:06.215 Device Reliability: OK 00:28:06.215 Read Only: No 00:28:06.215 Volatile Memory Backup: OK 00:28:06.215 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:06.215 Temperature Threshold: [2024-07-23 10:48:54.519505] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.215 [2024-07-23 10:48:54.519519] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10d0030) 00:28:06.215 [2024-07-23 10:48:54.519531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.215 [2024-07-23 10:48:54.519554] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129aa0, cid 7, qid 0 00:28:06.215 [2024-07-23 10:48:54.519659] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.215 [2024-07-23 10:48:54.519672] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.215 [2024-07-23 10:48:54.519680] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.215 [2024-07-23 10:48:54.519687] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129aa0) on tqpair=0x10d0030 00:28:06.215 [2024-07-23 10:48:54.519731] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:06.215 [2024-07-23 10:48:54.519754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.215 [2024-07-23 10:48:54.519767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.215 [2024-07-23 10:48:54.519778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.215 [2024-07-23 10:48:54.519789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.215 [2024-07-23 10:48:54.519803] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.215 [2024-07-23 10:48:54.519811] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.215 [2024-07-23 10:48:54.519819] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0030) 00:28:06.215 [2024-07-23 10:48:54.519830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.215 [2024-07-23 10:48:54.519853] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129520, cid 3, qid 0 00:28:06.215 [2024-07-23 10:48:54.519935] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.215 [2024-07-23 10:48:54.519948] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.215 [2024-07-23 10:48:54.519956] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.215 [2024-07-23 10:48:54.519963] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129520) on tqpair=0x10d0030 00:28:06.215 [2024-07-23 10:48:54.519977] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.215 [2024-07-23 10:48:54.519985] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.215 [2024-07-23 10:48:54.519992] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0030) 00:28:06.215 [2024-07-23 10:48:54.520004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.215 [2024-07-23 10:48:54.520030] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129520, cid 3, qid 0 00:28:06.215 [2024-07-23 10:48:54.520136] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.215 [2024-07-23 10:48:54.520151] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.215 [2024-07-23 10:48:54.520158] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.215 [2024-07-23 10:48:54.520169] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129520) on tqpair=0x10d0030 00:28:06.215 
[2024-07-23 10:48:54.520180] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:06.215 [2024-07-23 10:48:54.520189] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:06.215 [2024-07-23 10:48:54.520207] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.215 [2024-07-23 10:48:54.520216] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.215 [2024-07-23 10:48:54.520224] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0030) 00:28:06.215 [2024-07-23 10:48:54.520235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.215 [2024-07-23 10:48:54.520256] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129520, cid 3, qid 0 00:28:06.215 [2024-07-23 10:48:54.520347] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.215 [2024-07-23 10:48:54.520362] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.215 [2024-07-23 10:48:54.520369] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.215 [2024-07-23 10:48:54.520377] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129520) on tqpair=0x10d0030 00:28:06.215 [2024-07-23 10:48:54.520396] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.520406] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.520413] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0030) 00:28:06.216 [2024-07-23 10:48:54.520424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.216 [2024-07-23 10:48:54.520445] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129520, cid 3, qid 0 00:28:06.216 [2024-07-23 10:48:54.520547] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.216 [2024-07-23 10:48:54.520561] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.216 [2024-07-23 10:48:54.520568] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.520576] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129520) on tqpair=0x10d0030 00:28:06.216 [2024-07-23 10:48:54.520595] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.520605] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.520612] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0030) 00:28:06.216 [2024-07-23 10:48:54.520623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.216 [2024-07-23 10:48:54.520645] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129520, cid 3, qid 0 00:28:06.216 [2024-07-23 10:48:54.520731] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.216 [2024-07-23 10:48:54.520746] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.216 [2024-07-23 10:48:54.520753] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.520761] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129520) on tqpair=0x10d0030 00:28:06.216 [2024-07-23 10:48:54.520780] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.520789] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.520796] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x10d0030) 00:28:06.216 [2024-07-23 10:48:54.520808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.216 [2024-07-23 10:48:54.520829] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129520, cid 3, qid 0 00:28:06.216 [2024-07-23 10:48:54.520917] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.216 [2024-07-23 10:48:54.520933] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.216 [2024-07-23 10:48:54.520941] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.520948] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129520) on tqpair=0x10d0030 00:28:06.216 [2024-07-23 10:48:54.520967] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.520977] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.520984] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0030) 00:28:06.216 [2024-07-23 10:48:54.520996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.216 [2024-07-23 10:48:54.521017] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129520, cid 3, qid 0 00:28:06.216 [2024-07-23 10:48:54.521105] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.216 [2024-07-23 10:48:54.521119] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.216 [2024-07-23 10:48:54.521127] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.521134] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129520) on tqpair=0x10d0030 00:28:06.216 [2024-07-23 10:48:54.521153] nvme_tcp.c: 767:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.521163] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.521170] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0030) 00:28:06.216 [2024-07-23 10:48:54.521181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.216 [2024-07-23 10:48:54.521203] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129520, cid 3, qid 0 00:28:06.216 [2024-07-23 10:48:54.524491] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.216 [2024-07-23 10:48:54.524509] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.216 [2024-07-23 10:48:54.524517] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.524525] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129520) on tqpair=0x10d0030 00:28:06.216 [2024-07-23 10:48:54.524546] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.524556] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.524564] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0030) 00:28:06.216 [2024-07-23 10:48:54.524576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.216 [2024-07-23 10:48:54.524599] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1129520, cid 3, qid 0 00:28:06.216 [2024-07-23 10:48:54.524707] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.216 [2024-07-23 10:48:54.524721] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.216 [2024-07-23 10:48:54.524728] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:06.216 [2024-07-23 10:48:54.524736] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1129520) on tqpair=0x10d0030 00:28:06.216 [2024-07-23 10:48:54.524751] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:28:06.216 0 Kelvin (-273 Celsius) 00:28:06.216 Available Spare: 0% 00:28:06.216 Available Spare Threshold: 0% 00:28:06.216 Life Percentage Used: 0% 00:28:06.216 Data Units Read: 0 00:28:06.216 Data Units Written: 0 00:28:06.216 Host Read Commands: 0 00:28:06.216 Host Write Commands: 0 00:28:06.216 Controller Busy Time: 0 minutes 00:28:06.216 Power Cycles: 0 00:28:06.216 Power On Hours: 0 hours 00:28:06.216 Unsafe Shutdowns: 0 00:28:06.216 Unrecoverable Media Errors: 0 00:28:06.216 Lifetime Error Log Entries: 0 00:28:06.216 Warning Temperature Time: 0 minutes 00:28:06.216 Critical Temperature Time: 0 minutes 00:28:06.216 00:28:06.216 Number of Queues 00:28:06.216 ================ 00:28:06.216 Number of I/O Submission Queues: 127 00:28:06.216 Number of I/O Completion Queues: 127 00:28:06.216 00:28:06.216 Active Namespaces 00:28:06.216 ================= 00:28:06.216 Namespace ID:1 00:28:06.216 Error Recovery Timeout: Unlimited 00:28:06.216 Command Set Identifier: NVM (00h) 00:28:06.216 Deallocate: Supported 00:28:06.216 Deallocated/Unwritten Error: Not Supported 00:28:06.216 Deallocated Read Value: Unknown 00:28:06.216 Deallocate in Write Zeroes: Not Supported 00:28:06.216 Deallocated Guard Field: 0xFFFF 00:28:06.216 Flush: Supported 00:28:06.216 Reservation: Supported 00:28:06.216 Namespace Sharing Capabilities: Multiple Controllers 00:28:06.216 Size (in LBAs): 131072 (0GiB) 00:28:06.216 Capacity (in LBAs): 131072 (0GiB) 00:28:06.216 Utilization (in LBAs): 131072 (0GiB) 00:28:06.216 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:06.216 EUI64: ABCDEF0123456789 00:28:06.216 UUID: dabf9f3d-a538-4ff3-b01a-a2f8a6248d43 00:28:06.216 Thin Provisioning: 
Not Supported 00:28:06.216 Per-NS Atomic Units: Yes 00:28:06.216 Atomic Boundary Size (Normal): 0 00:28:06.216 Atomic Boundary Size (PFail): 0 00:28:06.216 Atomic Boundary Offset: 0 00:28:06.216 Maximum Single Source Range Length: 65535 00:28:06.216 Maximum Copy Length: 65535 00:28:06.216 Maximum Source Range Count: 1 00:28:06.216 NGUID/EUI64 Never Reused: No 00:28:06.216 Namespace Write Protected: No 00:28:06.216 Number of LBA Formats: 1 00:28:06.216 Current LBA Format: LBA Format #00 00:28:06.216 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:06.216 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:06.216 rmmod nvme_tcp 00:28:06.216 rmmod nvme_fabrics 00:28:06.216 rmmod nvme_keyring 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@124 -- # set -e 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3901343 ']' 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3901343 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3901343 ']' 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3901343 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3901343 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3901343' 00:28:06.216 killing process with pid 3901343 00:28:06.216 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3901343 00:28:06.217 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3901343 00:28:06.477 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:06.477 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:06.477 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:06.477 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:06.477 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:06.477 10:48:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.477 10:48:54 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.477 10:48:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.389 10:48:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:08.389 00:28:08.389 real 0m4.947s 00:28:08.389 user 0m4.089s 00:28:08.389 sys 0m1.593s 00:28:08.389 10:48:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:08.389 10:48:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:08.389 ************************************ 00:28:08.389 END TEST nvmf_identify 00:28:08.389 ************************************ 00:28:08.389 10:48:56 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:08.389 10:48:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:08.389 10:48:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:08.389 10:48:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.648 ************************************ 00:28:08.648 START TEST nvmf_perf 00:28:08.648 ************************************ 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:08.648 * Looking for test storage... 
00:28:08.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.648 10:48:56 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.649 10:48:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:10.557 10:48:58 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:28:10.557 Found 0000:08:00.0 (0x8086 - 0x159b) 00:28:10.557 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:28:10.558 Found 0000:08:00.1 (0x8086 - 0x159b) 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:28:10.558 Found net devices under 0000:08:00.0: cvl_0_0 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: 
cvl_0_1' 00:28:10.558 Found net devices under 0000:08:00.1: cvl_0_1 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.558 
10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:10.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:28:10.558 00:28:10.558 --- 10.0.0.2 ping statistics --- 00:28:10.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.558 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:28:10.558 00:28:10.558 --- 10.0.0.1 ping statistics --- 00:28:10.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.558 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3902878 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3902878 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3902878 ']' 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:10.558 10:48:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:10.558 [2024-07-23 10:48:58.781914] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:10.558 [2024-07-23 10:48:58.782012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.558 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.558 [2024-07-23 10:48:58.847258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.558 [2024-07-23 10:48:58.935153] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.558 [2024-07-23 10:48:58.935215] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.558 [2024-07-23 10:48:58.935230] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.558 [2024-07-23 10:48:58.935244] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.558 [2024-07-23 10:48:58.935256] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:10.558 [2024-07-23 10:48:58.935367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.558 [2024-07-23 10:48:58.935441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.558 [2024-07-23 10:48:58.935501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.558 [2024-07-23 10:48:58.935506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.558 10:48:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:10.558 10:48:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:10.558 10:48:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:10.558 10:48:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.558 10:48:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:10.816 10:48:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.816 10:48:59 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:10.816 10:48:59 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:14.102 10:49:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:14.102 10:49:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:14.102 10:49:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:28:14.102 10:49:02 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:14.361 10:49:02 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:14.361 10:49:02 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:84:00.0 ']' 00:28:14.361 10:49:02 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:14.361 10:49:02 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:14.361 10:49:02 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:14.619 [2024-07-23 10:49:03.086144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.619 10:49:03 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:15.215 10:49:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:15.215 10:49:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:15.479 10:49:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:15.479 10:49:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:15.738 10:49:03 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.995 [2024-07-23 10:49:04.274421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.995 10:49:04 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:16.253 10:49:04 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']' 00:28:16.253 10:49:04 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 
00:28:16.253 10:49:04 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:16.253 10:49:04 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:28:17.631 Initializing NVMe Controllers 00:28:17.631 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:28:17.631 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:28:17.631 Initialization complete. Launching workers. 00:28:17.631 ======================================================== 00:28:17.631 Latency(us) 00:28:17.631 Device Information : IOPS MiB/s Average min max 00:28:17.631 PCIE (0000:84:00.0) NSID 1 from core 0: 66000.89 257.82 484.28 34.68 6382.35 00:28:17.631 ======================================================== 00:28:17.631 Total : 66000.89 257.82 484.28 34.68 6382.35 00:28:17.631 00:28:17.631 10:49:05 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:17.631 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.564 Initializing NVMe Controllers 00:28:18.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:18.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:18.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:18.564 Initialization complete. Launching workers. 
00:28:18.564 ======================================================== 00:28:18.564 Latency(us) 00:28:18.564 Device Information : IOPS MiB/s Average min max 00:28:18.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.00 0.39 10135.76 223.22 45969.77 00:28:18.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 62.00 0.24 16329.90 7001.16 47890.55 00:28:18.564 ======================================================== 00:28:18.564 Total : 162.00 0.63 12506.36 223.22 47890.55 00:28:18.565 00:28:18.565 10:49:07 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:18.824 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.201 Initializing NVMe Controllers 00:28:20.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:20.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:20.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:20.201 Initialization complete. Launching workers. 
00:28:20.201 ======================================================== 00:28:20.201 Latency(us) 00:28:20.201 Device Information : IOPS MiB/s Average min max 00:28:20.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7916.48 30.92 4043.80 678.23 10521.03 00:28:20.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3870.41 15.12 8295.02 6568.96 16019.88 00:28:20.201 ======================================================== 00:28:20.201 Total : 11786.90 46.04 5439.75 678.23 16019.88 00:28:20.201 00:28:20.201 10:49:08 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:20.201 10:49:08 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:20.201 10:49:08 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:20.201 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.739 Initializing NVMe Controllers 00:28:22.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.739 Controller IO queue size 128, less than required. 00:28:22.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:22.739 Controller IO queue size 128, less than required. 00:28:22.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:22.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:22.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:22.739 Initialization complete. Launching workers. 
00:28:22.739 ======================================================== 00:28:22.739 Latency(us) 00:28:22.739 Device Information : IOPS MiB/s Average min max 00:28:22.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1690.04 422.51 76942.42 50249.15 141093.12 00:28:22.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 551.35 137.84 239565.82 63732.18 405242.89 00:28:22.739 ======================================================== 00:28:22.739 Total : 2241.39 560.35 116945.45 50249.15 405242.89 00:28:22.739 00:28:22.739 10:49:10 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:22.739 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.739 No valid NVMe controllers or AIO or URING devices found 00:28:22.739 Initializing NVMe Controllers 00:28:22.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.739 Controller IO queue size 128, less than required. 00:28:22.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:22.739 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:22.739 Controller IO queue size 128, less than required. 00:28:22.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:22.739 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:22.739 WARNING: Some requested NVMe devices were skipped 00:28:22.739 10:49:10 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:22.739 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.275 Initializing NVMe Controllers 00:28:25.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.275 Controller IO queue size 128, less than required. 00:28:25.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:25.275 Controller IO queue size 128, less than required. 00:28:25.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:25.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:25.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:25.275 Initialization complete. Launching workers. 
00:28:25.275 00:28:25.275 ==================== 00:28:25.275 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:25.275 TCP transport: 00:28:25.275 polls: 8474 00:28:25.275 idle_polls: 4833 00:28:25.275 sock_completions: 3641 00:28:25.275 nvme_completions: 5981 00:28:25.275 submitted_requests: 8920 00:28:25.275 queued_requests: 1 00:28:25.275 00:28:25.275 ==================== 00:28:25.275 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:25.275 TCP transport: 00:28:25.275 polls: 10997 00:28:25.275 idle_polls: 7108 00:28:25.275 sock_completions: 3889 00:28:25.275 nvme_completions: 6301 00:28:25.275 submitted_requests: 9328 00:28:25.275 queued_requests: 1 00:28:25.275 ======================================================== 00:28:25.275 Latency(us) 00:28:25.275 Device Information : IOPS MiB/s Average min max 00:28:25.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1495.00 373.75 87220.16 58106.18 127533.15 00:28:25.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1575.00 393.75 81602.30 39797.30 119288.51 00:28:25.275 ======================================================== 00:28:25.275 Total : 3069.99 767.50 84338.03 39797.30 127533.15 00:28:25.275 00:28:25.275 10:49:13 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:25.275 10:49:13 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:25.275 10:49:13 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:25.275 10:49:13 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:84:00.0 ']' 00:28:25.275 10:49:13 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:28.556 10:49:17 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # 
ls_guid=54e98d94-6b11-4459-a3d5-3755290caf48 00:28:28.556 10:49:17 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 54e98d94-6b11-4459-a3d5-3755290caf48 00:28:28.556 10:49:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=54e98d94-6b11-4459-a3d5-3755290caf48 00:28:28.556 10:49:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:28.556 10:49:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:28.556 10:49:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:28.556 10:49:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:28.813 10:49:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:28.813 { 00:28:28.813 "uuid": "54e98d94-6b11-4459-a3d5-3755290caf48", 00:28:28.813 "name": "lvs_0", 00:28:28.813 "base_bdev": "Nvme0n1", 00:28:28.813 "total_data_clusters": 238234, 00:28:28.813 "free_clusters": 238234, 00:28:28.813 "block_size": 512, 00:28:28.813 "cluster_size": 4194304 00:28:28.813 } 00:28:28.813 ]' 00:28:28.813 10:49:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="54e98d94-6b11-4459-a3d5-3755290caf48") .free_clusters' 00:28:29.071 10:49:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:29.071 10:49:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="54e98d94-6b11-4459-a3d5-3755290caf48") .cluster_size' 00:28:29.071 10:49:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:29.071 10:49:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:29.071 10:49:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:29.071 952936 00:28:29.071 10:49:17 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:29.071 10:49:17 nvmf_tcp.nvmf_perf -- 
host/perf.sh@78 -- # free_mb=20480 00:28:29.071 10:49:17 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 54e98d94-6b11-4459-a3d5-3755290caf48 lbd_0 20480 00:28:29.638 10:49:18 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=34fcce1c-a20b-45ba-aa48-4ac875935140 00:28:29.638 10:49:18 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 34fcce1c-a20b-45ba-aa48-4ac875935140 lvs_n_0 00:28:30.574 10:49:18 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=24e44cda-03e6-4f54-9ecf-8f4f7bfc75d9 00:28:30.574 10:49:18 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 24e44cda-03e6-4f54-9ecf-8f4f7bfc75d9 00:28:30.574 10:49:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=24e44cda-03e6-4f54-9ecf-8f4f7bfc75d9 00:28:30.574 10:49:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:30.574 10:49:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:30.574 10:49:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:30.574 10:49:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:30.832 10:49:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:30.832 { 00:28:30.832 "uuid": "54e98d94-6b11-4459-a3d5-3755290caf48", 00:28:30.832 "name": "lvs_0", 00:28:30.832 "base_bdev": "Nvme0n1", 00:28:30.832 "total_data_clusters": 238234, 00:28:30.832 "free_clusters": 233114, 00:28:30.832 "block_size": 512, 00:28:30.832 "cluster_size": 4194304 00:28:30.832 }, 00:28:30.832 { 00:28:30.832 "uuid": "24e44cda-03e6-4f54-9ecf-8f4f7bfc75d9", 00:28:30.832 "name": "lvs_n_0", 00:28:30.832 "base_bdev": "34fcce1c-a20b-45ba-aa48-4ac875935140", 00:28:30.832 "total_data_clusters": 5114, 00:28:30.832 "free_clusters": 
5114, 00:28:30.832 "block_size": 512, 00:28:30.832 "cluster_size": 4194304 00:28:30.832 } 00:28:30.832 ]' 00:28:30.832 10:49:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="24e44cda-03e6-4f54-9ecf-8f4f7bfc75d9") .free_clusters' 00:28:30.832 10:49:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:30.832 10:49:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="24e44cda-03e6-4f54-9ecf-8f4f7bfc75d9") .cluster_size' 00:28:30.832 10:49:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:30.832 10:49:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:30.832 10:49:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:30.832 20456 00:28:30.832 10:49:19 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:30.832 10:49:19 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 24e44cda-03e6-4f54-9ecf-8f4f7bfc75d9 lbd_nest_0 20456 00:28:31.090 10:49:19 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=5693443b-1eb0-41da-976e-a2e78c354afa 00:28:31.090 10:49:19 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:31.348 10:49:19 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:31.348 10:49:19 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 5693443b-1eb0-41da-976e-a2e78c354afa 00:28:31.606 10:49:19 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:31.864 10:49:20 nvmf_tcp.nvmf_perf -- 
host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:31.864 10:49:20 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:31.864 10:49:20 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:31.864 10:49:20 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:31.864 10:49:20 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:31.864 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.076 Initializing NVMe Controllers 00:28:44.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:44.076 Initialization complete. Launching workers. 00:28:44.076 ======================================================== 00:28:44.076 Latency(us) 00:28:44.076 Device Information : IOPS MiB/s Average min max 00:28:44.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.78 0.02 21442.90 184.82 46489.45 00:28:44.076 ======================================================== 00:28:44.076 Total : 46.78 0.02 21442.90 184.82 46489.45 00:28:44.076 00:28:44.076 10:49:30 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:44.076 10:49:30 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:44.076 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.058 Initializing NVMe Controllers 00:28:54.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:54.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:54.058 Initialization complete. 
Launching workers. 00:28:54.058 ======================================================== 00:28:54.058 Latency(us) 00:28:54.058 Device Information : IOPS MiB/s Average min max 00:28:54.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.10 9.01 13887.80 6051.85 47904.05 00:28:54.058 ======================================================== 00:28:54.058 Total : 72.10 9.01 13887.80 6051.85 47904.05 00:28:54.058 00:28:54.058 10:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:54.058 10:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:54.058 10:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:54.058 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.042 Initializing NVMe Controllers 00:29:04.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:04.042 Initialization complete. Launching workers. 
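The lvstore sizing traced earlier (get_lvs_free_mb at perf.sh@73/@84) reduces to free_clusters × cluster_size converted to MB; a standalone sketch using the figures bdev_lvol_get_lvstores reported in this run:

```shell
# Free-space arithmetic mirroring common/autotest_common.sh's get_lvs_free_mb;
# cluster counts are the values bdev_lvol_get_lvstores returned above.
fc=238234                                       # free_clusters of lvs_0
cs=4194304                                      # cluster_size in bytes (4 MiB)
free_mb=$(( fc * cs / 1024 / 1024 ))            # 952936, later clamped to 20480
nested_fc=5114                                  # free_clusters of lvs_n_0
nested_mb=$(( nested_fc * cs / 1024 / 1024 ))   # 20456, under the 20480 cap
printf '%s %s\n' "$free_mb" "$nested_mb"
```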
00:29:04.042 ======================================================== 00:29:04.042 Latency(us) 00:29:04.042 Device Information : IOPS MiB/s Average min max 00:29:04.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6743.10 3.29 4745.47 376.21 12194.69 00:29:04.042 ======================================================== 00:29:04.042 Total : 6743.10 3.29 4745.47 376.21 12194.69 00:29:04.042 00:29:04.042 10:49:51 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:04.042 10:49:51 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.042 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.061 Initializing NVMe Controllers 00:29:14.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:14.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:14.061 Initialization complete. Launching workers. 
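The export path traced at perf.sh@89-93 boils down to three rpc.py calls; a hedged outline with values copied from this log (commands are only printed here, and rpc.py's full jenkins path is shortened):

```shell
# Outline of the subsystem/namespace/listener RPC sequence issued above;
# the NQN and lvol UUID are taken from this run, but nothing is executed.
NQN=nqn.2016-06.io.spdk:cnode1
LVOL=5693443b-1eb0-41da-976e-a2e78c354afa
cmds=(
  "rpc.py nvmf_create_subsystem $NQN -a -s SPDK00000000000001"
  "rpc.py nvmf_subsystem_add_ns $NQN $LVOL"
  "rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
)
printf '%s\n' "${cmds[@]}"
```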
00:29:14.061 ======================================================== 00:29:14.061 Latency(us) 00:29:14.061 Device Information : IOPS MiB/s Average min max 00:29:14.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3694.21 461.78 8664.12 882.05 20041.85 00:29:14.061 ======================================================== 00:29:14.061 Total : 3694.21 461.78 8664.12 882.05 20041.85 00:29:14.061 00:29:14.061 10:50:01 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:14.061 10:50:01 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:14.061 10:50:01 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:14.061 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.038 Initializing NVMe Controllers 00:29:24.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:24.038 Controller IO queue size 128, less than required. 00:29:24.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:24.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:24.038 Initialization complete. Launching workers. 
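perf.sh@95-99 sweeps queue depth against IO size, which is why six spdk_nvme_perf reports appear in this section; a dry-run sketch of that loop (the real invocation is left as a comment, since it needs a live target):

```shell
# Queue-depth / IO-size sweep as declared at perf.sh@95-96; each iteration
# would run spdk_nvme_perf for 10 s of 50/50 randrw against 10.0.0.2:4420.
qd_depth=("1" "32" "128")
io_size=("512" "131072")
runs=()
for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    # spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
    #   -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    runs+=("qd=$qd o=$o")
  done
done
printf '%s\n' "${runs[@]}"
```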
00:29:24.038 ======================================================== 00:29:24.038 Latency(us) 00:29:24.038 Device Information : IOPS MiB/s Average min max 00:29:24.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10252.20 5.01 12490.22 1885.02 25078.02 00:29:24.038 ======================================================== 00:29:24.038 Total : 10252.20 5.01 12490.22 1885.02 25078.02 00:29:24.038 00:29:24.038 10:50:11 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:24.038 10:50:11 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:24.038 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.011 Initializing NVMe Controllers 00:29:34.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.011 Controller IO queue size 128, less than required. 00:29:34.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:34.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.011 Initialization complete. Launching workers. 
00:29:34.011 ======================================================== 00:29:34.011 Latency(us) 00:29:34.011 Device Information : IOPS MiB/s Average min max 00:29:34.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1165.40 145.67 110204.05 10322.60 249471.65 00:29:34.012 ======================================================== 00:29:34.012 Total : 1165.40 145.67 110204.05 10322.60 249471.65 00:29:34.012 00:29:34.012 10:50:22 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:34.269 10:50:22 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5693443b-1eb0-41da-976e-a2e78c354afa 00:29:35.202 10:50:23 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:35.460 10:50:23 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 34fcce1c-a20b-45ba-aa48-4ac875935140 00:29:35.718 10:50:24 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:35.976 rmmod 
nvme_tcp 00:29:35.976 rmmod nvme_fabrics 00:29:35.976 rmmod nvme_keyring 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3902878 ']' 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3902878 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3902878 ']' 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3902878 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3902878 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3902878' 00:29:35.976 killing process with pid 3902878 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3902878 00:29:35.976 10:50:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3902878 00:29:37.876 10:50:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:37.876 10:50:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:37.876 10:50:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:37.876 10:50:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:37.876 10:50:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:29:37.876 10:50:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.876 10:50:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:37.876 10:50:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.784 10:50:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:39.784 00:29:39.784 real 1m31.030s 00:29:39.784 user 5m38.632s 00:29:39.784 sys 0m14.863s 00:29:39.784 10:50:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:39.784 10:50:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:39.784 ************************************ 00:29:39.784 END TEST nvmf_perf 00:29:39.784 ************************************ 00:29:39.784 10:50:27 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:39.784 10:50:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:39.784 10:50:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:39.784 10:50:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.784 ************************************ 00:29:39.784 START TEST nvmf_fio_host 00:29:39.784 ************************************ 00:29:39.784 10:50:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:39.784 * Looking for test storage... 
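The teardown traced at perf.sh@104-108 and nvmftestfini deletes the lvol stack in reverse creation order before unloading the initiator modules; a non-executing sketch of that ordering, with UUIDs taken from this run:

```shell
# Reverse-order cleanup mirroring perf.sh@104-108 and nvmftestfini above;
# commands are collected and printed rather than run.
cleanup=(
  "rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1"
  "rpc.py bdev_lvol_delete 5693443b-1eb0-41da-976e-a2e78c354afa"
  "rpc.py bdev_lvol_delete_lvstore -l lvs_n_0"
  "rpc.py bdev_lvol_delete 34fcce1c-a20b-45ba-aa48-4ac875935140"
  "rpc.py bdev_lvol_delete_lvstore -l lvs_0"
  "modprobe -v -r nvme-tcp"
  "modprobe -v -r nvme-fabrics"
)
printf '%s\n' "${cleanup[@]}"
```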
00:29:39.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:39.784 
10:50:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:39.784 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.785 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:39.785 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.785 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:39.785 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:39.785 10:50:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:39.785 10:50:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.162 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:29:41.163 Found 0000:08:00.0 (0x8086 - 0x159b) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:29:41.163 Found 0000:08:00.1 (0x8086 - 0x159b) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:29:41.163 Found net devices under 0000:08:00.0: cvl_0_0 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:29:41.163 Found net devices under 0000:08:00.1: cvl_0_1 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:41.163 
10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.163 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:41.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:29:41.421 00:29:41.421 --- 10.0.0.2 ping statistics --- 00:29:41.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.421 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:41.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:29:41.421 00:29:41.421 --- 10.0.0.1 ping statistics --- 00:29:41.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.421 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:41.421 10:50:29 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3912240 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3912240 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3912240 ']' 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:41.421 10:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.421 [2024-07-23 10:50:29.831777] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
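The namespace setup traced just above (`nvmf_tcp_init` in `nvmf/common.sh`) isolates the target interface from the initiator. The helper below is a dry-run sketch of those steps: it only prints the command sequence, in the order the trace executes it, since actually running it requires root and the physical E810 ports (`cvl_0_0`/`cvl_0_1`) on this rig. The function name `setup_tcp_netns` is illustrative, not part of the SPDK scripts; all addresses and arguments are copied from the trace.

```shell
# Dry-run sketch of the nvmf_tcp_init steps seen in this trace.
# Prints the commands instead of executing them (real run needs root
# plus the cvl_0_0/cvl_0_1 netdevs from this test bed).
setup_tcp_netns() {
    local target_if=$1 initiator_if=$2 ns=$3
    cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $initiator_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}

setup_tcp_netns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

The two ping checks at the end mirror the bidirectional reachability test the log performs before the target is started.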
00:29:41.421 [2024-07-23 10:50:29.831871] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.421 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.421 [2024-07-23 10:50:29.896074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:41.679 [2024-07-23 10:50:29.984309] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.679 [2024-07-23 10:50:29.984374] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.679 [2024-07-23 10:50:29.984389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.679 [2024-07-23 10:50:29.984402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.679 [2024-07-23 10:50:29.984414] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
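Once `nvmf_tgt` is up inside the namespace, `host/fio.sh` provisions it over JSON-RPC. Below is a condensed, order-preserving dry-run sketch of the `rpc.py` calls this run issues for the first fio pass; it only echoes the commands (the workspace path is this job's, and every argument is copied from the trace), so it can be read without a running target.

```shell
# Condensed sketch of the rpc.py provisioning sequence from this trace:
# transport -> malloc bdev -> subsystem -> namespace -> data + discovery
# listeners. Printed, not executed (a real run needs the live target).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

provision_cmds() {
    cat <<EOF
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
EOF
}

provision_cmds
```

After this sequence the fio plugin connects with `--filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'`, which is exactly the target state these calls create.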
00:29:41.679 [2024-07-23 10:50:29.984508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.679 [2024-07-23 10:50:29.984601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.679 [2024-07-23 10:50:29.984604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.679 [2024-07-23 10:50:29.984538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.679 10:50:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:41.679 10:50:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:41.679 10:50:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:41.937 [2024-07-23 10:50:30.377920] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.937 10:50:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:41.937 10:50:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.937 10:50:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.937 10:50:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:42.503 Malloc1 00:29:42.503 10:50:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.760 10:50:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:43.018 10:50:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.276 
[2024-07-23 10:50:31.604014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.276 10:50:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:43.534 10:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:43.793 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:43.793 fio-3.35 00:29:43.793 Starting 1 thread 00:29:43.793 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.318 00:29:46.318 test: (groupid=0, jobs=1): err= 0: pid=3912599: Tue Jul 23 10:50:34 2024 00:29:46.318 read: IOPS=7884, BW=30.8MiB/s (32.3MB/s)(61.8MiB/2008msec) 00:29:46.318 slat (usec): 
min=2, max=148, avg= 2.89, stdev= 1.74 00:29:46.318 clat (usec): min=2765, max=15306, avg=8864.14, stdev=736.62 00:29:46.318 lat (usec): min=2790, max=15309, avg=8867.03, stdev=736.49 00:29:46.318 clat percentiles (usec): 00:29:46.318 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 7963], 20.00th=[ 8291], 00:29:46.318 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:29:46.318 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[ 9896], 00:29:46.318 | 99.00th=[10421], 99.50th=[10683], 99.90th=[13698], 99.95th=[14091], 00:29:46.318 | 99.99th=[15270] 00:29:46.318 bw ( KiB/s): min=30536, max=31928, per=99.96%, avg=31526.00, stdev=666.98, samples=4 00:29:46.318 iops : min= 7634, max= 7982, avg=7881.50, stdev=166.74, samples=4 00:29:46.318 write: IOPS=7857, BW=30.7MiB/s (32.2MB/s)(61.6MiB/2008msec); 0 zone resets 00:29:46.318 slat (usec): min=2, max=134, avg= 3.03, stdev= 1.28 00:29:46.318 clat (usec): min=1471, max=14024, avg=7327.25, stdev=625.54 00:29:46.318 lat (usec): min=1481, max=14027, avg=7330.29, stdev=625.44 00:29:46.318 clat percentiles (usec): 00:29:46.318 | 1.00th=[ 5866], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6849], 00:29:46.318 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7504], 00:29:46.318 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8225], 00:29:46.318 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[12387], 99.95th=[13435], 00:29:46.318 | 99.99th=[13960] 00:29:46.318 bw ( KiB/s): min=31392, max=31504, per=100.00%, avg=31440.00, stdev=50.60, samples=4 00:29:46.318 iops : min= 7848, max= 7876, avg=7860.00, stdev=12.65, samples=4 00:29:46.318 lat (msec) : 2=0.02%, 4=0.12%, 10=97.64%, 20=2.22% 00:29:46.318 cpu : usr=69.51%, sys=28.85%, ctx=60, majf=0, minf=31 00:29:46.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:46.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.318 issued rwts: total=15832,15778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.318 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.318 00:29:46.318 Run status group 0 (all jobs): 00:29:46.318 READ: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=61.8MiB (64.8MB), run=2008-2008msec 00:29:46.318 WRITE: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=61.6MiB (64.6MB), run=2008-2008msec 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:46.318 10:50:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:46.318 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:46.318 fio-3.35 00:29:46.318 Starting 1 thread 00:29:46.318 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.842 00:29:48.842 test: (groupid=0, jobs=1): err= 0: pid=3912854: Tue Jul 23 10:50:36 2024 00:29:48.842 read: IOPS=7312, BW=114MiB/s (120MB/s)(229MiB/2007msec) 00:29:48.842 slat (usec): 
min=3, max=124, avg= 4.16, stdev= 1.75 00:29:48.842 clat (usec): min=2517, max=19281, avg=10082.08, stdev=2333.82 00:29:48.842 lat (usec): min=2522, max=19286, avg=10086.24, stdev=2333.92 00:29:48.842 clat percentiles (usec): 00:29:48.842 | 1.00th=[ 5145], 5.00th=[ 6456], 10.00th=[ 7242], 20.00th=[ 8225], 00:29:48.842 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[10552], 00:29:48.842 | 70.00th=[11076], 80.00th=[11863], 90.00th=[13042], 95.00th=[14353], 00:29:48.842 | 99.00th=[16712], 99.50th=[17433], 99.90th=[19006], 99.95th=[19006], 00:29:48.842 | 99.99th=[19268] 00:29:48.842 bw ( KiB/s): min=50336, max=67936, per=50.96%, avg=59624.00, stdev=7864.90, samples=4 00:29:48.842 iops : min= 3146, max= 4246, avg=3726.50, stdev=491.56, samples=4 00:29:48.842 write: IOPS=4288, BW=67.0MiB/s (70.3MB/s)(122MiB/1825msec); 0 zone resets 00:29:48.842 slat (usec): min=32, max=200, avg=37.49, stdev= 6.79 00:29:48.842 clat (usec): min=7479, max=22408, avg=13182.55, stdev=2201.92 00:29:48.842 lat (usec): min=7522, max=22443, avg=13220.05, stdev=2201.65 00:29:48.842 clat percentiles (usec): 00:29:48.842 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11338], 00:29:48.842 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13042], 60.00th=[13566], 00:29:48.842 | 70.00th=[14222], 80.00th=[15008], 90.00th=[16057], 95.00th=[16909], 00:29:48.843 | 99.00th=[19268], 99.50th=[19792], 99.90th=[21103], 99.95th=[21890], 00:29:48.843 | 99.99th=[22414] 00:29:48.843 bw ( KiB/s): min=52704, max=70496, per=90.47%, avg=62072.00, stdev=8133.35, samples=4 00:29:48.843 iops : min= 3292, max= 4408, avg=3879.50, stdev=509.79, samples=4 00:29:48.843 lat (msec) : 4=0.07%, 10=34.77%, 20=65.02%, 50=0.14% 00:29:48.843 cpu : usr=80.36%, sys=18.39%, ctx=44, majf=0, minf=51 00:29:48.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:48.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:48.843 issued rwts: total=14677,7826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:48.843 00:29:48.843 Run status group 0 (all jobs): 00:29:48.843 READ: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=229MiB (240MB), run=2007-2007msec 00:29:48.843 WRITE: bw=67.0MiB/s (70.3MB/s), 67.0MiB/s-67.0MiB/s (70.3MB/s-70.3MB/s), io=122MiB (128MB), run=1825-1825msec 00:29:48.843 10:50:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.843 10:50:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:48.843 10:50:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:48.843 10:50:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:48.843 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:48.843 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:29:48.843 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:48.843 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:48.843 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:48.843 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:48.843 10:50:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:29:48.843 10:50:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 -i 10.0.0.2 00:29:52.170 Nvme0n1 00:29:52.170 10:50:40 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:55.473 10:50:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=95cdb26e-731d-4bac-a395-f2ed4a590ac8 00:29:55.473 10:50:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 95cdb26e-731d-4bac-a395-f2ed4a590ac8 00:29:55.473 10:50:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=95cdb26e-731d-4bac-a395-f2ed4a590ac8 00:29:55.473 10:50:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:55.473 10:50:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:29:55.473 10:50:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:29:55.473 10:50:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:55.473 10:50:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:55.473 { 00:29:55.473 "uuid": "95cdb26e-731d-4bac-a395-f2ed4a590ac8", 00:29:55.473 "name": "lvs_0", 00:29:55.473 "base_bdev": "Nvme0n1", 00:29:55.473 "total_data_clusters": 930, 00:29:55.473 "free_clusters": 930, 00:29:55.473 "block_size": 512, 00:29:55.473 "cluster_size": 1073741824 00:29:55.473 } 00:29:55.473 ]' 00:29:55.473 10:50:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="95cdb26e-731d-4bac-a395-f2ed4a590ac8") .free_clusters' 00:29:55.474 10:50:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:29:55.474 10:50:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="95cdb26e-731d-4bac-a395-f2ed4a590ac8") .cluster_size' 00:29:55.474 10:50:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:29:55.474 10:50:43 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1369 -- # free_mb=952320 00:29:55.474 10:50:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:29:55.474 952320 00:29:55.474 10:50:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:55.731 eb29a130-d91f-4487-b6a6-73d3e900ef74 00:29:55.731 10:50:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:55.988 10:50:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:56.246 10:50:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:56.504 10:50:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:56.762 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:56.762 fio-3.35 00:29:56.762 Starting 1 thread 00:29:56.762 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.290 00:29:59.290 test: (groupid=0, jobs=1): err= 0: pid=3913902: Tue Jul 23 10:50:47 2024 00:29:59.290 read: IOPS=5177, BW=20.2MiB/s (21.2MB/s)(41.5MiB/2050msec) 00:29:59.290 slat (usec): min=2, max=187, avg= 2.80, stdev= 2.47 00:29:59.290 clat (usec): min=807, max=171850, avg=13494.37, stdev=12767.31 00:29:59.290 lat (usec): min=812, max=171914, avg=13497.17, stdev=12767.77 00:29:59.290 clat percentiles (msec): 00:29:59.290 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:29:59.290 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:29:59.290 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 14], 95.00th=[ 15], 00:29:59.290 | 99.00th=[ 56], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:29:59.290 | 99.99th=[ 171] 00:29:59.290 bw ( KiB/s): min=15248, max=23128, per=100.00%, avg=21086.00, stdev=3892.64, samples=4 00:29:59.290 iops : min= 3812, max= 5782, avg=5271.50, stdev=973.16, samples=4 00:29:59.290 write: IOPS=5169, BW=20.2MiB/s (21.2MB/s)(41.4MiB/2050msec); 0 zone resets 00:29:59.290 slat (usec): min=2, max=137, avg= 2.89, stdev= 1.58 00:29:59.290 clat (usec): min=399, max=169561, avg=11126.56, stdev=11965.78 00:29:59.290 lat (usec): min=403, max=169569, avg=11129.45, stdev=11966.20 00:29:59.290 clat percentiles (msec): 00:29:59.290 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:29:59.290 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:29:59.290 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 12], 95.00th=[ 12], 00:29:59.290 | 99.00th=[ 53], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:29:59.290 | 99.99th=[ 169] 00:29:59.290 bw ( KiB/s): min=16096, max=22856, per=100.00%, avg=21082.00, stdev=3325.10, samples=4 00:29:59.290 
iops : min= 4024, max= 5714, avg=5270.50, stdev=831.28, samples=4 00:29:59.290 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:59.290 lat (msec) : 2=0.02%, 4=0.12%, 10=24.16%, 20=74.47%, 50=0.01% 00:29:59.290 lat (msec) : 100=0.59%, 250=0.60% 00:29:59.290 cpu : usr=66.57%, sys=32.21%, ctx=104, majf=0, minf=31 00:29:59.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:29:59.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:59.290 issued rwts: total=10614,10597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:59.290 00:29:59.290 Run status group 0 (all jobs): 00:29:59.290 READ: bw=20.2MiB/s (21.2MB/s), 20.2MiB/s-20.2MiB/s (21.2MB/s-21.2MB/s), io=41.5MiB (43.5MB), run=2050-2050msec 00:29:59.290 WRITE: bw=20.2MiB/s (21.2MB/s), 20.2MiB/s-20.2MiB/s (21.2MB/s-21.2MB/s), io=41.4MiB (43.4MB), run=2050-2050msec 00:29:59.290 10:50:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:59.290 10:50:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:00.663 10:50:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=e8b2fd74-62c7-4c80-8cf1-0ac09c2d8a05 00:30:00.663 10:50:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb e8b2fd74-62c7-4c80-8cf1-0ac09c2d8a05 00:30:00.663 10:50:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=e8b2fd74-62c7-4c80-8cf1-0ac09c2d8a05 00:30:00.663 10:50:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:00.663 10:50:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:00.663 
10:50:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:00.663 10:50:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:00.663 10:50:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:00.663 { 00:30:00.663 "uuid": "95cdb26e-731d-4bac-a395-f2ed4a590ac8", 00:30:00.663 "name": "lvs_0", 00:30:00.663 "base_bdev": "Nvme0n1", 00:30:00.663 "total_data_clusters": 930, 00:30:00.663 "free_clusters": 0, 00:30:00.663 "block_size": 512, 00:30:00.663 "cluster_size": 1073741824 00:30:00.663 }, 00:30:00.663 { 00:30:00.663 "uuid": "e8b2fd74-62c7-4c80-8cf1-0ac09c2d8a05", 00:30:00.663 "name": "lvs_n_0", 00:30:00.663 "base_bdev": "eb29a130-d91f-4487-b6a6-73d3e900ef74", 00:30:00.663 "total_data_clusters": 237847, 00:30:00.663 "free_clusters": 237847, 00:30:00.663 "block_size": 512, 00:30:00.663 "cluster_size": 4194304 00:30:00.663 } 00:30:00.663 ]' 00:30:00.664 10:50:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="e8b2fd74-62c7-4c80-8cf1-0ac09c2d8a05") .free_clusters' 00:30:00.664 10:50:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:00.664 10:50:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="e8b2fd74-62c7-4c80-8cf1-0ac09c2d8a05") .cluster_size' 00:30:00.664 10:50:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:00.664 10:50:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:00.664 10:50:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:00.664 951388 00:30:00.664 10:50:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:01.597 004fc57e-f091-44e9-b565-e34e0c6fa080 00:30:01.597 10:50:49 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:01.855 10:50:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:02.113 10:50:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:02.370 
10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:02.370 10:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:02.627 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:02.627 fio-3.35 00:30:02.628 Starting 1 thread 00:30:02.628 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.156 00:30:05.156 test: (groupid=0, jobs=1): err= 0: pid=3914481: Tue Jul 23 10:50:53 2024 00:30:05.156 read: 
IOPS=4649, BW=18.2MiB/s (19.0MB/s)(36.5MiB/2010msec) 00:30:05.156 slat (usec): min=2, max=158, avg= 2.87, stdev= 2.15 00:30:05.156 clat (usec): min=6372, max=24543, avg=15092.14, stdev=1480.74 00:30:05.156 lat (usec): min=6378, max=24546, avg=15095.01, stdev=1480.56 00:30:05.156 clat percentiles (usec): 00:30:05.156 | 1.00th=[11600], 5.00th=[12911], 10.00th=[13304], 20.00th=[13960], 00:30:05.156 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:30:05.156 | 70.00th=[15795], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:30:05.156 | 99.00th=[18220], 99.50th=[18744], 99.90th=[22938], 99.95th=[22938], 00:30:05.156 | 99.99th=[24511] 00:30:05.156 bw ( KiB/s): min=17784, max=19000, per=99.71%, avg=18544.00, stdev=531.42, samples=4 00:30:05.156 iops : min= 4446, max= 4750, avg=4636.00, stdev=132.86, samples=4 00:30:05.156 write: IOPS=4647, BW=18.2MiB/s (19.0MB/s)(36.5MiB/2010msec); 0 zone resets 00:30:05.156 slat (usec): min=2, max=115, avg= 3.00, stdev= 1.42 00:30:05.156 clat (usec): min=3338, max=23142, avg=12320.66, stdev=1194.12 00:30:05.156 lat (usec): min=3346, max=23145, avg=12323.66, stdev=1194.06 00:30:05.156 clat percentiles (usec): 00:30:05.156 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[10945], 20.00th=[11469], 00:30:05.156 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:30:05.156 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14091], 00:30:05.156 | 99.00th=[14877], 99.50th=[15270], 99.90th=[21103], 99.95th=[22938], 00:30:05.156 | 99.99th=[23200] 00:30:05.156 bw ( KiB/s): min=18384, max=18808, per=99.92%, avg=18576.00, stdev=174.66, samples=4 00:30:05.156 iops : min= 4596, max= 4702, avg=4644.00, stdev=43.67, samples=4 00:30:05.156 lat (msec) : 4=0.04%, 10=1.09%, 20=98.67%, 50=0.20% 00:30:05.156 cpu : usr=67.55%, sys=31.16%, ctx=116, majf=0, minf=31 00:30:05.157 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:30:05.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:05.157 issued rwts: total=9345,9342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:05.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:05.157 00:30:05.157 Run status group 0 (all jobs): 00:30:05.157 READ: bw=18.2MiB/s (19.0MB/s), 18.2MiB/s-18.2MiB/s (19.0MB/s-19.0MB/s), io=36.5MiB (38.3MB), run=2010-2010msec 00:30:05.157 WRITE: bw=18.2MiB/s (19.0MB/s), 18.2MiB/s-18.2MiB/s (19.0MB/s-19.0MB/s), io=36.5MiB (38.3MB), run=2010-2010msec 00:30:05.157 10:50:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:05.415 10:50:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:05.415 10:50:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:09.596 10:50:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:09.596 10:50:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:12.877 10:51:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:12.877 10:51:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:14.774 rmmod nvme_tcp 00:30:14.774 rmmod nvme_fabrics 00:30:14.774 rmmod nvme_keyring 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3912240 ']' 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3912240 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3912240 ']' 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3912240 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3912240 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3912240' 00:30:14.774 killing process with pid 3912240 00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3912240 
00:30:14.774 10:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3912240 00:30:15.032 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:15.032 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:15.032 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:15.032 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:15.032 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:15.032 10:51:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.032 10:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:15.032 10:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.568 10:51:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:17.568 00:30:17.568 real 0m37.490s 00:30:17.568 user 2m25.437s 00:30:17.568 sys 0m6.340s 00:30:17.568 10:51:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:17.568 10:51:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.568 ************************************ 00:30:17.568 END TEST nvmf_fio_host 00:30:17.568 ************************************ 00:30:17.568 10:51:05 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:17.568 10:51:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:17.568 10:51:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:17.568 10:51:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:17.568 ************************************ 00:30:17.568 START TEST nvmf_failover 00:30:17.568 ************************************ 00:30:17.568 10:51:05 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:17.568 * Looking for test storage... 00:30:17.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.568 10:51:05 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.568 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:17.569 10:51:05 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:17.569 10:51:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:18.947 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:18.948 10:51:07 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:30:18.948 Found 0000:08:00.0 (0x8086 - 0x159b) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:30:18.948 Found 0000:08:00.1 (0x8086 - 0x159b) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:30:18.948 Found net devices under 0000:08:00.0: cvl_0_0 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:30:18.948 Found net devices under 0000:08:00.1: cvl_0_1 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:18.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:30:18.948 00:30:18.948 --- 10.0.0.2 ping statistics --- 00:30:18.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.948 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:30:18.948 00:30:18.948 --- 10.0.0.1 ping statistics --- 00:30:18.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.948 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3917169 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3917169 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3917169 ']' 
00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.948 10:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:18.949 10:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.949 10:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:18.949 10:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:18.949 [2024-07-23 10:51:07.434193] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:18.949 [2024-07-23 10:51:07.434291] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:19.207 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.207 [2024-07-23 10:51:07.499425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:19.207 [2024-07-23 10:51:07.586316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:19.207 [2024-07-23 10:51:07.586380] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:19.207 [2024-07-23 10:51:07.586397] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:19.207 [2024-07-23 10:51:07.586411] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:19.207 [2024-07-23 10:51:07.586423] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
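The nvmf_tcp_init trace above (nvmf/common.sh@229-268) moves one port of the NIC into a network namespace so target and initiator can talk over real TCP on a single host. The sequence can be sketched as below; this is a condensed illustration of the traced commands, not the script itself. The real steps need root and the two cvl_0_* ports, so run() only prints each command here so the flow can be inspected safely:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology set up in the trace above.
# run() prints instead of executing: the real commands require root
# and the cvl_0_0 / cvl_0_1 interfaces present on the test node.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                        # target-side namespace from the log
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"       # target port moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1   # initiator side stays in root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                    # initiator -> target sanity check
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Both ping checks succeed in the log (0% packet loss each way), after which the target application itself is launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`.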
00:30:19.207 [2024-07-23 10:51:07.586513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:19.207 [2024-07-23 10:51:07.586565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:19.207 [2024-07-23 10:51:07.586568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.207 10:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:19.207 10:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:19.207 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:19.207 10:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:19.207 10:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:19.465 10:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.465 10:51:07 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:19.723 [2024-07-23 10:51:07.986714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.723 10:51:08 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:19.986 Malloc0 00:30:19.986 10:51:08 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:20.244 10:51:08 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:20.501 10:51:08 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.791 [2024-07-23 10:51:09.192753] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.791 10:51:09 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:21.049 [2024-07-23 10:51:09.485615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:21.049 10:51:09 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:21.307 [2024-07-23 10:51:09.778571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:21.307 10:51:09 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3917889 00:30:21.307 10:51:09 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:21.307 10:51:09 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:21.307 10:51:09 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3917889 /var/tmp/bdevperf.sock 00:30:21.307 10:51:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3917889 ']' 00:30:21.307 10:51:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:21.307 10:51:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:21.307 10:51:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:21.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:21.307 10:51:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:21.307 10:51:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:21.872 10:51:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:21.872 10:51:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:21.872 10:51:10 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:22.129 NVMe0n1 00:30:22.129 10:51:10 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:22.695 00:30:22.695 10:51:11 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3918021 00:30:22.695 10:51:11 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:22.695 10:51:11 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:24.097 10:51:12 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:24.097 10:51:12 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:27.400 10:51:15 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:27.658 00 00:30:27.658 10:51:16 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:27.916 [2024-07-23 10:51:16.299217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d11e0 is same with the state(5) to be set 00:30:27.916 10:51:16 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:31.200 10:51:19 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.200 [2024-07-23 10:51:19.606893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target
Listening on 10.0.0.2 port 4420 *** 00:30:31.200 10:51:19 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:32.134 10:51:20 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:32.701 [2024-07-23 10:51:20.903830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1550 is same with the state(5) to be set 00:30:32.701 10:51:20 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3918021 00:30:37.967 0 00:30:37.967 10:51:26 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # 
killprocess 3917889 00:30:37.967 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3917889 ']' 00:30:37.967 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3917889 00:30:37.967 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:37.967 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:37.967 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3917889 00:30:37.967 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:37.967 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:37.967 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3917889' 00:30:37.967 killing process with pid 3917889 00:30:37.967 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3917889 00:30:37.967 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3917889 00:30:38.233 10:51:26 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:38.233 [2024-07-23 10:51:09.843719] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:38.233 [2024-07-23 10:51:09.843822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917889 ] 00:30:38.233 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.233 [2024-07-23 10:51:09.904804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.233 [2024-07-23 10:51:09.992348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.233 Running I/O for 15 seconds... 
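The failover exercise driven by host/failover.sh in the trace can be summarized as the RPC sequence below: the target exposes one subsystem on ports 4420/4421/4422, bdevperf attaches NVMe0 over two of those paths, and listeners are then removed and re-added under I/O. This is a hedged sketch reconstructed from the traced rpc.py calls; rpc() only echoes here, whereas the log invokes scripts/rpc.py against the live target (and, for the attach calls, against the bdevperf RPC socket):

```shell
#!/usr/bin/env bash
# Sketch of the traced host/failover.sh RPC flow; rpc() prints rather
# than calling the real scripts/rpc.py, so no SPDK target is required.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do          # three listeners = three paths
    rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done

# bdevperf (started with -z -q 128 -o 4096 -w verify -t 15 -f) attaches
# the same controller over two ports, then the active listener is removed
# so I/O fails over to the surviving path:
rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"
rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The qpair "recv state ... is same with the state(5) to be set" errors in the log are the expected teardown noise from the removed listener's connections; the run finishes with the bdevperf test returning 0.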
00:30:38.233 [2024-07-23 10:51:12.432426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.432980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.432995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.433012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.233 [2024-07-23 10:51:12.433028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.433044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.233 [2024-07-23 10:51:12.433059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 
[2024-07-23 10:51:12.433076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.233 [2024-07-23 10:51:12.433091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.433108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.233 [2024-07-23 10:51:12.433123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.433140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.233 [2024-07-23 10:51:12.433155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.433172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.233 [2024-07-23 10:51:12.433187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.433204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.233 [2024-07-23 10:51:12.433220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.433236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.433252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.433269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.433288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.433315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.433331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.233 [2024-07-23 10:51:12.433348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.233 [2024-07-23 10:51:12.433363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.234 [2024-07-23 10:51:12.433401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.234 [2024-07-23 10:51:12.433433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.234 [2024-07-23 10:51:12.433466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.234 [2024-07-23 10:51:12.433506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.234 [2024-07-23 10:51:12.433539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 
[2024-07-23 10:51:12.433653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.433967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.433984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 
[2024-07-23 10:51:12.434214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.234 [2024-07-23 10:51:12.434651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.234 [2024-07-23 10:51:12.434666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.434683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.434698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.434715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.434730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.434747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.434762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 
[2024-07-23 10:51:12.434779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.434793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.434810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.434825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.434842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.434858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.434875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.235 [2024-07-23 10:51:12.434890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.434908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.235 [2024-07-23 10:51:12.434923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.434940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.235 [2024-07-23 10:51:12.434955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.434976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.235 [2024-07-23 10:51:12.434992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.235 [2024-07-23 10:51:12.435024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.235 [2024-07-23 10:51:12.435058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.235 [2024-07-23 10:51:12.435111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.235 [2024-07-23 10:51:12.435162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.235 [2024-07-23 10:51:12.435214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 
[2024-07-23 10:51:12.435506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.435969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.435991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.436015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.436036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.436060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.436081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.436105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.436126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.436154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.436176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.436198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.436220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.436243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.436266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 
[2024-07-23 10:51:12.436290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.436311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.436334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.436356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.235 [2024-07-23 10:51:12.436380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.235 [2024-07-23 10:51:12.436401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.436445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.436502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.436548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.436593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.436644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.436689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.436734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.436789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.436833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.436877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.436921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.436966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.436991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.437012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.437035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.437058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 
[2024-07-23 10:51:12.437081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.437103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.437129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.437151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.437177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.437198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.437222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.437246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.437274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.437298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:12.437327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.236 [2024-07-23 10:51:12.437356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.236 [2024-07-23 10:51:12.437392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.236 [2024-07-23 10:51:12.437419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.236 [2024-07-23 10:51:12.437444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.236 [2024-07-23 10:51:12.437467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.236 [2024-07-23 10:51:12.437500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ccc10 is same with the state(5) to be set
00:30:38.236 [2024-07-23 10:51:12.437527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.236 [2024-07-23 10:51:12.437546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.236 [2024-07-23 10:51:12.437565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71904 len:8 PRP1 0x0 PRP2 0x0
00:30:38.236 [2024-07-23 10:51:12.437586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.236 [2024-07-23 10:51:12.437655] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12ccc10 was disconnected and freed. reset controller.
00:30:38.236 [2024-07-23 10:51:12.437685] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:38.236 [2024-07-23 10:51:12.437733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:38.236 [2024-07-23 10:51:12.437759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.236 [2024-07-23 10:51:12.437782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:38.236 [2024-07-23 10:51:12.437802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.236 [2024-07-23 10:51:12.437825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:38.236 [2024-07-23 10:51:12.437846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.236 [2024-07-23 10:51:12.437868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:38.236 [2024-07-23 10:51:12.437889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.236 [2024-07-23 10:51:12.437910] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:38.236 [2024-07-23 10:51:12.437970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ae360 (9): Bad file descriptor
00:30:38.236 [2024-07-23 10:51:12.442948] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:38.236 [2024-07-23 10:51:12.475673] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:38.236 [2024-07-23 10:51:16.294402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:38.236 [2024-07-23 10:51:16.294476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.236 [2024-07-23 10:51:16.294504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:38.236 [2024-07-23 10:51:16.294533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.236 [2024-07-23 10:51:16.294549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:38.236 [2024-07-23 10:51:16.294563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.236 [2024-07-23 10:51:16.294579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:38.236 [2024-07-23 10:51:16.294593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.236 [2024-07-23 10:51:16.294607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ae360 is same with the state(5) to be set
00:30:38.236 [2024-07-23 10:51:16.300660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.236 [2024-07-23 10:51:16.300689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:16.300717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.236 [2024-07-23 10:51:16.300734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:16.300753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.236 [2024-07-23 10:51:16.300769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.236 [2024-07-23 10:51:16.300787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.236 [2024-07-23 10:51:16.300803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.300820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.300836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.300853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.300868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.300885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.300900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.300917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.300932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.300948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.300964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.300980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 
10:51:16.301249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301433] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.237 [2024-07-23 10:51:16.301819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.237 [2024-07-23 10:51:16.301835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.301856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.301872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.301889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.301904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.301921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.301936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.301953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.301968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.301985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 
[2024-07-23 10:51:16.302193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302413] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.302961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.302985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.238 [2024-07-23 10:51:16.303007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.303030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.238 [2024-07-23 10:51:16.303051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.303074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.238 [2024-07-23 10:51:16.303094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.303117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.238 [2024-07-23 10:51:16.303138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.303160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.238 [2024-07-23 10:51:16.303181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.303205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.238 [2024-07-23 10:51:16.303227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.303250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.238 [2024-07-23 10:51:16.303272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.303297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.238 [2024-07-23 10:51:16.303318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.303341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.238 [2024-07-23 10:51:16.303362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.303384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.238 [2024-07-23 10:51:16.303405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.303428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.238 [2024-07-23 10:51:16.303449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 [2024-07-23 10:51:16.303477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.238 [2024-07-23 10:51:16.303510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.238 
[2024-07-23 10:51:16.303534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.238 [2024-07-23 10:51:16.303555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.239 [2024-07-23 10:51:16.303577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.239 [2024-07-23 10:51:16.303599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.239 [2024-07-23 10:51:16.303621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.239 [2024-07-23 10:51:16.303642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.239 [2024-07-23 10:51:16.303665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.239 [2024-07-23 10:51:16.303687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.239 [2024-07-23 10:51:16.303712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.239 [2024-07-23 10:51:16.303734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.239 [2024-07-23 10:51:16.303758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.239 [2024-07-23 10:51:16.303780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.239 [2024-07-23 10:51:16.303803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.239 [2024-07-23 10:51:16.303825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: WRITE lba:41600-41896 and READ lba:41008-41064 on sqid:1, each completed ABORTED - SQ DELETION (00/08) ...]
00:30:38.240 [2024-07-23 10:51:16.306128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.240 [2024-07-23 10:51:16.306151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.240 [2024-07-23 10:51:16.306171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41904 len:8 PRP1 0x0 PRP2 0x0
00:30:38.240 [2024-07-23 10:51:16.306192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.240 [2024-07-23 10:51:16.306265] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1477d00 was disconnected and freed. reset controller.
00:30:38.240 [2024-07-23 10:51:16.306295] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:38.240 [2024-07-23 10:51:16.306318] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:38.240 [2024-07-23 10:51:16.306389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ae360 (9): Bad file descriptor
00:30:38.240 [2024-07-23 10:51:16.311300] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:38.240 [2024-07-23 10:51:16.442899] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:38.240 [2024-07-23 10:51:20.904954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.240 [2024-07-23 10:51:20.905001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: READ lba:88944-89040 and WRITE lba:89048-89464 on sqid:1, each completed ABORTED - SQ DELETION (00/08) ...]
[2024-07-23 10:51:20.907245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.907957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.907982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 
10:51:20.908010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.908037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.908065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.908088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.908112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.908133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.908155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.908176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.908197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.908217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.908239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.908258] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.908280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.908300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.908322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.908343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.908368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.908389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.908417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.242 [2024-07-23 10:51:20.908440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.908491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.242 [2024-07-23 10:51:20.908516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89688 len:8 PRP1 0x0 PRP2 0x0 00:30:38.242 [2024-07-23 10:51:20.908536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.242 [2024-07-23 10:51:20.908561] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.242 [2024-07-23 10:51:20.908579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.242 [2024-07-23 10:51:20.908596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89696 len:8 PRP1 0x0 PRP2 0x0 00:30:38.242 [2024-07-23 10:51:20.908616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.908637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.908654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.908671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89704 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.908690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.908710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.908727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.908745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89712 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.908772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.908794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.908812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 
10:51:20.908829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89720 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.908848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.908869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.908887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.908904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89728 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.908923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.908945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.908962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.908979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89736 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.908998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.909037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.909058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89744 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.909078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.909115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.909133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89752 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.909152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.909189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.909206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89760 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.909226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.909264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.909287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89768 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.909307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.909344] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.909361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89776 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.909386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.909425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.909443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89784 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.909462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.909511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.909528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89792 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.909548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.909589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.909606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89800 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 
[2024-07-23 10:51:20.909625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.909669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.909686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89808 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.909706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.909747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.909767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89816 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.909789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.909833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.909853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89824 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.909876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.909919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.909944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89832 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.909965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.909988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.910006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.910024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89840 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.910050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.910073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.910091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.910110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89848 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.910130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.910151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.910169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.910187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89856 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.910207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.910228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.910245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.910262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89864 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.910283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.910309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.910327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.910345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89872 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.910366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.910388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.910407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.243 [2024-07-23 10:51:20.910425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89880 len:8 PRP1 0x0 PRP2 0x0 00:30:38.243 [2024-07-23 10:51:20.910446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:38.243 [2024-07-23 10:51:20.910467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.243 [2024-07-23 10:51:20.910495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.244 [2024-07-23 10:51:20.910516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89888 len:8 PRP1 0x0 PRP2 0x0 00:30:38.244 [2024-07-23 10:51:20.910537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.910558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.244 [2024-07-23 10:51:20.910577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.244 [2024-07-23 10:51:20.910601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89896 len:8 PRP1 0x0 PRP2 0x0 00:30:38.244 [2024-07-23 10:51:20.910624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.910645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.244 [2024-07-23 10:51:20.910664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.244 [2024-07-23 10:51:20.910683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89904 len:8 PRP1 0x0 PRP2 0x0 00:30:38.244 [2024-07-23 10:51:20.910710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.910734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.244 [2024-07-23 10:51:20.910753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:38.244 [2024-07-23 10:51:20.910771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89912 len:8 PRP1 0x0 PRP2 0x0 00:30:38.244 [2024-07-23 10:51:20.910792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.910815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.244 [2024-07-23 10:51:20.910834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.244 [2024-07-23 10:51:20.910854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89920 len:8 PRP1 0x0 PRP2 0x0 00:30:38.244 [2024-07-23 10:51:20.910877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.910899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.244 [2024-07-23 10:51:20.910919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.244 [2024-07-23 10:51:20.910941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89928 len:8 PRP1 0x0 PRP2 0x0 00:30:38.244 [2024-07-23 10:51:20.910965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.910987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.244 [2024-07-23 10:51:20.911008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.244 [2024-07-23 10:51:20.911027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89936 len:8 PRP1 0x0 PRP2 0x0 00:30:38.244 [2024-07-23 10:51:20.911048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.911070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.244 [2024-07-23 10:51:20.911089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.244 [2024-07-23 10:51:20.911107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89944 len:8 PRP1 0x0 PRP2 0x0 00:30:38.244 [2024-07-23 10:51:20.911131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.911155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:38.244 [2024-07-23 10:51:20.911175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:38.244 [2024-07-23 10:51:20.911193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89952 len:8 PRP1 0x0 PRP2 0x0 00:30:38.244 [2024-07-23 10:51:20.911214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.911285] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1477af0 was disconnected and freed. reset controller. 
00:30:38.244 [2024-07-23 10:51:20.911315] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:38.244 [2024-07-23 10:51:20.911367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.244 [2024-07-23 10:51:20.911393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.911416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.244 [2024-07-23 10:51:20.911438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.911460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.244 [2024-07-23 10:51:20.911497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.911525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.244 [2024-07-23 10:51:20.911547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.244 [2024-07-23 10:51:20.911567] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:38.244 [2024-07-23 10:51:20.911638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ae360 (9): Bad file descriptor 00:30:38.244 [2024-07-23 10:51:20.916429] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:38.244 [2024-07-23 10:51:21.044394] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:38.244 00:30:38.244 Latency(us) 00:30:38.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.244 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:38.244 Verification LBA range: start 0x0 length 0x4000 00:30:38.244 NVMe0n1 : 15.05 7506.12 29.32 587.29 0.00 15740.42 631.09 43108.12 00:30:38.244 =================================================================================================================== 00:30:38.244 Total : 7506.12 29.32 587.29 0.00 15740.42 631.09 43108.12 00:30:38.244 Received shutdown signal, test time was about 15.000000 seconds 00:30:38.244 00:30:38.244 Latency(us) 00:30:38.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.244 =================================================================================================================== 00:30:38.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:38.244 10:51:26 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:38.244 10:51:26 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:38.244 10:51:26 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:38.244 10:51:26 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3919412 00:30:38.244 10:51:26 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:38.244 10:51:26 
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3919412 /var/tmp/bdevperf.sock 00:30:38.244 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3919412 ']' 00:30:38.244 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:38.244 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:38.244 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:38.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:38.244 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:38.244 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:38.503 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:38.503 10:51:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:38.503 10:51:26 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:38.761 [2024-07-23 10:51:27.079194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:38.761 10:51:27 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:39.019 [2024-07-23 10:51:27.327879] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:39.019 10:51:27 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:39.278 NVMe0n1 00:30:39.538 10:51:27 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:39.796 00:30:39.796 10:51:28 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:40.054 00:30:40.054 10:51:28 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:40.054 10:51:28 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:40.312 10:51:28 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:40.570 10:51:28 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:43.861 10:51:31 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:43.861 10:51:31 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:43.861 10:51:32 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3919915 00:30:43.861 10:51:32 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:43.861 10:51:32 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3919915 00:30:45.239 0 00:30:45.239 10:51:33 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:45.239 [2024-07-23 10:51:26.606943] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:45.239 [2024-07-23 10:51:26.607046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919412 ] 00:30:45.239 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.239 [2024-07-23 10:51:26.668017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.239 [2024-07-23 10:51:26.754769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.239 [2024-07-23 10:51:28.905612] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:45.239 [2024-07-23 10:51:28.905709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.239 [2024-07-23 10:51:28.905732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.239 [2024-07-23 10:51:28.905758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.239 [2024-07-23 10:51:28.905774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.239 [2024-07-23 10:51:28.905789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.239 [2024-07-23 10:51:28.905804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.239 [2024-07-23 10:51:28.905819] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.239 [2024-07-23 10:51:28.905835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.239 [2024-07-23 10:51:28.905850] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.239 [2024-07-23 10:51:28.905910] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.239 [2024-07-23 10:51:28.905944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2e360 (9): Bad file descriptor 00:30:45.239 [2024-07-23 10:51:28.966872] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:45.239 Running I/O for 1 seconds... 00:30:45.239 00:30:45.239 Latency(us) 00:30:45.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.239 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:45.239 Verification LBA range: start 0x0 length 0x4000 00:30:45.239 NVMe0n1 : 1.01 7694.19 30.06 0.00 0.00 16559.52 449.04 14369.37 00:30:45.239 =================================================================================================================== 00:30:45.239 Total : 7694.19 30.06 0.00 0.00 16559.52 449.04 14369.37 00:30:45.239 10:51:33 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:45.239 10:51:33 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:45.239 10:51:33 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:30:45.497 10:51:33 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:45.497 10:51:33 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:45.754 10:51:34 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:46.013 10:51:34 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:49.302 10:51:37 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:49.302 10:51:37 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:49.302 10:51:37 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3919412 00:30:49.302 10:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3919412 ']' 00:30:49.302 10:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3919412 00:30:49.302 10:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:49.302 10:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:49.302 10:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3919412 00:30:49.302 10:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:49.302 10:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:49.302 10:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3919412' 00:30:49.302 killing process with pid 3919412 00:30:49.302 10:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3919412 00:30:49.302 
10:51:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3919412 00:30:49.561 10:51:37 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:49.561 10:51:37 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:49.821 rmmod nvme_tcp 00:30:49.821 rmmod nvme_fabrics 00:30:49.821 rmmod nvme_keyring 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3917169 ']' 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3917169 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3917169 ']' 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3917169 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@951 -- # uname 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3917169 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3917169' 00:30:49.821 killing process with pid 3917169 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3917169 00:30:49.821 10:51:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3917169 00:30:50.080 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:50.080 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:50.080 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:50.080 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:50.080 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:50.080 10:51:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.080 10:51:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:50.080 10:51:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.618 10:51:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:52.618 00:30:52.618 real 0m34.964s 00:30:52.618 user 2m4.810s 00:30:52.618 sys 0m5.604s 00:30:52.618 10:51:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:52.618 10:51:40 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:30:52.618 ************************************ 00:30:52.618 END TEST nvmf_failover 00:30:52.618 ************************************ 00:30:52.618 10:51:40 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:52.618 10:51:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:52.618 10:51:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:52.618 10:51:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:52.618 ************************************ 00:30:52.618 START TEST nvmf_host_discovery 00:30:52.618 ************************************ 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:52.618 * Looking for test storage... 00:30:52.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.618 10:51:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:52.619 10:51:40 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:52.619 10:51:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:30:53.999 Found 0000:08:00.0 (0x8086 - 0x159b) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:30:53.999 Found 0000:08:00.1 (0x8086 - 0x159b) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:53.999 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:30:54.000 Found net devices under 0000:08:00.0: cvl_0_0 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:30:54.000 Found net devices under 0000:08:00.1: cvl_0_1 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:54.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:54.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:30:54.000 00:30:54.000 --- 10.0.0.2 ping statistics --- 00:30:54.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.000 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:54.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:54.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:30:54.000 00:30:54.000 --- 10.0.0.1 ping statistics --- 00:30:54.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.000 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3922006 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3922006 00:30:54.000 
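The trace above shows `nvmf_tcp_init` building a point-to-point TCP test bed: the second E810 port (`cvl_0_0`) is moved into a fresh network namespace, both sides get `10.0.0.x/24` addresses, TCP port 4420 is opened in the firewall, and connectivity is verified with `ping` in both directions. A condensed, hypothetical replay of that sequence is sketched below; it defaults to dry-run (printing the commands instead of running them), because the real steps need root plus the `cvl_0_0`/`cvl_0_1` interfaces that exist only on this CI host.

```shell
# Condensed sketch of the nvmf_tcp_init steps seen in the trace.
# DRYRUN defaults to on: commands are echoed, not executed, since the
# real sequence needs root and the cvl_0_* NICs of the CI box.
: "${DRYRUN:=1}"
run() { ${DRYRUN:+echo} "$@"; }

NS=cvl_0_0_ns_spdk                        # target-side namespace from the log

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"       # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                    # initiator -> target reachability
```

Running with `DRYRUN=` (empty) would execute the commands for real. The trace additionally prefixes every target-side app invocation with `ip netns exec $NVMF_TARGET_NAMESPACE`, which is why the `nvmf_tgt` started a few lines later ends up listening on 10.0.0.2 inside the namespace.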
10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3922006 ']' 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:54.000 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.000 [2024-07-23 10:51:42.405448] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:54.000 [2024-07-23 10:51:42.405561] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.000 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.000 [2024-07-23 10:51:42.470396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.259 [2024-07-23 10:51:42.556530] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.259 [2024-07-23 10:51:42.556596] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.259 [2024-07-23 10:51:42.556612] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:54.259 [2024-07-23 10:51:42.556632] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:30:54.259 [2024-07-23 10:51:42.556644] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:54.259 [2024-07-23 10:51:42.556691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.259 [2024-07-23 10:51:42.682855] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.259 [2024-07-23 10:51:42.691021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:54.259 10:51:42 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.259 null0 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.259 null1 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3922031 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3922031 /tmp/host.sock 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3922031 ']' 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:30:54.259 
10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:54.259 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:54.259 10:51:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.519 [2024-07-23 10:51:42.766787] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:54.519 [2024-07-23 10:51:42.766874] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922031 ] 00:30:54.519 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.519 [2024-07-23 10:51:42.828197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.519 [2024-07-23 10:51:42.915667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.519 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:54.519 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:30:54.519 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:54.519 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:54.519 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.519 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:54.778 10:51:43 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == 
'' ]] 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:54.778 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.038 [2024-07-23 10:51:43.312678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- 
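The repeated `get_subsystem_names` / `get_bdev_list` checks in this part of the trace are thin wrappers: an `rpc_cmd` query whose JSON reply is flattened with `jq -r '.[].name' | sort | xargs` and then compared against the expected string (`''` at this point, since no controller or bdev is attached yet). A hypothetical stand-in is sketched below with a canned JSON reply in place of the live RPC, and GNU `sed` in place of `jq` so it runs without jq installed; the real helpers live in host/discovery.sh.

```shell
# Stand-in for the trace's get_bdev_list helper. The real helper runs
# "rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs";
# here a canned JSON reply replaces the RPC and GNU sed replaces jq.
sample='[{"name":"nvme0n2"},{"name":"nvme0n1"}]'

get_bdev_list() {
    printf '%s\n' "$sample" |
        sed 's/},{/}\n{/g' |                      # one JSON object per line
        sed -n 's/.*"name":"\([^"]*\)".*/\1/p' |  # extract each name field
        sort | xargs                              # single sorted line
}

get_bdev_list    # prints: nvme0n1 nvme0n2
```

The `sort | xargs` step is what lets the test compare multi-device results as one deterministic string, e.g. `[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]` later in the run.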
common/autotest_common.sh@10 -- # set +x 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # (( max-- )) 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:30:55.038 10:51:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:30:55.607 [2024-07-23 10:51:44.080645] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:55.607 [2024-07-23 10:51:44.080678] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:55.607 [2024-07-23 10:51:44.080703] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:55.867 [2024-07-23 10:51:44.167983] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new 
subsystem nvme0 00:30:55.867 [2024-07-23 10:51:44.271756] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:55.867 [2024-07-23 10:51:44.271793] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:56.129 
10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # 
get_subsystem_paths nvme0 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 
00:30:56.129 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' 
'"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.420 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.421 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.706 [2024-07-23 10:51:44.917370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:56.706 [2024-07-23 10:51:44.918547] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:56.706 [2024-07-23 10:51:44.918592] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 
00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:56.706 10:51:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.706 [2024-07-23 10:51:45.005306] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- 
# (( max-- )) 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:56.706 10:51:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:30:56.966 [2024-07-23 10:51:45.265556] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:56.966 [2024-07-23 10:51:45.265585] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:56.966 [2024-07-23 10:51:45.265597] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == 
'"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' 
== 'expected_count))' 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.905 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.905 [2024-07-23 10:51:46.157378] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:57.905 [2024-07-23 10:51:46.157424] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:57.905 [2024-07-23 10:51:46.159402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:57.905 [2024-07-23 10:51:46.159440] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.905 [2024-07-23 10:51:46.159459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:57.905 [2024-07-23 10:51:46.159475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.906 [2024-07-23 10:51:46.159499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:57.906 [2024-07-23 10:51:46.159515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.906 [2024-07-23 10:51:46.159531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:57.906 [2024-07-23 10:51:46.159555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.906 [2024-07-23 10:51:46.159570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c6a00 is same with the state(5) to be set 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' 
'"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:57.906 [2024-07-23 10:51:46.169420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c6a00 (9): Bad file descriptor 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.906 [2024-07-23 10:51:46.179461] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:57.906 [2024-07-23 10:51:46.179736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.906 [2024-07-23 10:51:46.179782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c6a00 with addr=10.0.0.2, port=4420 00:30:57.906 [2024-07-23 10:51:46.179802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c6a00 is same with the state(5) to be set 00:30:57.906 [2024-07-23 10:51:46.179829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c6a00 (9): Bad file descriptor 00:30:57.906 [2024-07-23 10:51:46.179861] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:57.906 [2024-07-23 10:51:46.179878] 
nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:57.906 [2024-07-23 10:51:46.179895] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:57.906 [2024-07-23 10:51:46.179919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.906 [2024-07-23 10:51:46.189552] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:57.906 [2024-07-23 10:51:46.189738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.906 [2024-07-23 10:51:46.189768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c6a00 with addr=10.0.0.2, port=4420 00:30:57.906 [2024-07-23 10:51:46.189785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c6a00 is same with the state(5) to be set 00:30:57.906 [2024-07-23 10:51:46.189809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c6a00 (9): Bad file descriptor 00:30:57.906 [2024-07-23 10:51:46.189831] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:57.906 [2024-07-23 10:51:46.189846] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:57.906 [2024-07-23 10:51:46.189861] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:57.906 [2024-07-23 10:51:46.189882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.906 [2024-07-23 10:51:46.199633] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:57.906 [2024-07-23 10:51:46.199782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.906 [2024-07-23 10:51:46.199812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c6a00 with addr=10.0.0.2, port=4420 00:30:57.906 [2024-07-23 10:51:46.199829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c6a00 is same with the state(5) to be set 00:30:57.906 [2024-07-23 10:51:46.199853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c6a00 (9): Bad file descriptor 00:30:57.906 [2024-07-23 10:51:46.199875] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:57.906 [2024-07-23 10:51:46.199890] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:57.906 [2024-07-23 10:51:46.199905] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:57.906 [2024-07-23 10:51:46.199926] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:57.906 [2024-07-23 10:51:46.209711] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:57.906 [2024-07-23 10:51:46.209868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.906 [2024-07-23 10:51:46.209898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c6a00 with addr=10.0.0.2, port=4420 00:30:57.906 [2024-07-23 
10:51:46.209917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c6a00 is same with the state(5) to be set 00:30:57.906 [2024-07-23 10:51:46.209944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c6a00 (9): Bad file descriptor 00:30:57.906 [2024-07-23 10:51:46.209966] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:57.906 [2024-07-23 10:51:46.209982] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:57.906 [2024-07-23 10:51:46.209998] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:57.906 [2024-07-23 10:51:46.210020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.906 [2024-07-23 10:51:46.219793] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:57.906 [2024-07-23 10:51:46.219954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.906 [2024-07-23 10:51:46.219986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c6a00 with addr=10.0.0.2, port=4420 00:30:57.906 [2024-07-23 10:51:46.220004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c6a00 is same with the state(5) to be set 00:30:57.906 [2024-07-23 10:51:46.220029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c6a00 (9): Bad file descriptor 00:30:57.906 [2024-07-23 10:51:46.220064] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:57.906 [2024-07-23 10:51:46.220083] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:57.906 [2024-07-23 10:51:46.220098] 
nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:57.906 [2024-07-23 10:51:46.220120] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.906 [2024-07-23 10:51:46.229872] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:57.906 [2024-07-23 10:51:46.230096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.906 [2024-07-23 10:51:46.230125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c6a00 with addr=10.0.0.2, port=4420 00:30:57.906 [2024-07-23 10:51:46.230142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c6a00 is same with the state(5) to be set 00:30:57.906 [2024-07-23 10:51:46.230166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c6a00 (9): Bad file descriptor 00:30:57.906 [2024-07-23 10:51:46.230216] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:57.906 [2024-07-23 10:51:46.230236] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:57.906 [2024-07-23 10:51:46.230251] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:57.906 [2024-07-23 10:51:46.230273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.906 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.906 [2024-07-23 10:51:46.239948] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:57.906 [2024-07-23 10:51:46.240110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.906 [2024-07-23 10:51:46.240140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c6a00 with addr=10.0.0.2, port=4420 00:30:57.906 [2024-07-23 10:51:46.240158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c6a00 is same with the state(5) to be set 00:30:57.906 [2024-07-23 10:51:46.240182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c6a00 (9): Bad file descriptor 00:30:57.906 [2024-07-23 10:51:46.240204] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:57.907 [2024-07-23 10:51:46.240219] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:57.907 [2024-07-23 10:51:46.240234] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:57.907 [2024-07-23 10:51:46.240255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.907 [2024-07-23 10:51:46.245310] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:57.907 [2024-07-23 10:51:46.245343] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:30:57.907 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.166 10:51:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.100 [2024-07-23 10:51:47.539280] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:59.100 [2024-07-23 10:51:47.539322] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:59.100 [2024-07-23 10:51:47.539346] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:59.357 [2024-07-23 10:51:47.666725] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:59.616 [2024-07-23 10:51:47.936439] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:59.616 [2024-07-23 10:51:47.936513] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:59.616 10:51:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.616 request: 00:30:59.616 { 00:30:59.616 "name": "nvme", 00:30:59.616 "trtype": "tcp", 00:30:59.616 "traddr": "10.0.0.2", 00:30:59.616 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:59.616 "adrfam": "ipv4", 00:30:59.616 "trsvcid": "8009", 00:30:59.616 "wait_for_attach": true, 00:30:59.616 "method": "bdev_nvme_start_discovery", 00:30:59.616 "req_id": 1 00:30:59.616 } 00:30:59.616 Got JSON-RPC error response 00:30:59.616 response: 00:30:59.616 { 00:30:59.616 "code": -17, 00:30:59.616 "message": "File exists" 
00:30:59.616 } 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:30:59.616 10:51:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:59.616 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.616 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:59.616 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:59.616 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:59.616 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:59.616 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:59.616 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.616 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:59.616 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.616 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:59.616 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.616 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.616 request: 00:30:59.616 { 00:30:59.616 "name": "nvme_second", 00:30:59.617 "trtype": "tcp", 00:30:59.617 "traddr": "10.0.0.2", 00:30:59.617 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:59.617 "adrfam": "ipv4", 
00:30:59.617 "trsvcid": "8009", 00:30:59.617 "wait_for_attach": true, 00:30:59.617 "method": "bdev_nvme_start_discovery", 00:30:59.617 "req_id": 1 00:30:59.617 } 00:30:59.617 Got JSON-RPC error response 00:30:59.617 response: 00:30:59.617 { 00:30:59.617 "code": -17, 00:30:59.617 "message": "File exists" 00:30:59.617 } 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:59.617 10:51:48 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:59.617 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:59.875 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.875 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:59.875 10:51:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:59.875 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:59.875 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:59.875 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:59.875 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.875 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:59.875 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.875 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:59.875 10:51:48 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.875 10:51:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.815 [2024-07-23 10:51:49.151931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:00.815 [2024-07-23 10:51:49.152004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f3700 with addr=10.0.0.2, port=8010 00:31:00.815 [2024-07-23 10:51:49.152032] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:00.815 [2024-07-23 10:51:49.152048] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:00.815 [2024-07-23 10:51:49.152073] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:01.756 [2024-07-23 10:51:50.154430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:01.756 [2024-07-23 10:51:50.154509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f3700 with addr=10.0.0.2, port=8010 00:31:01.756 [2024-07-23 10:51:50.154538] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:01.756 [2024-07-23 10:51:50.154555] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:01.756 [2024-07-23 10:51:50.154579] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:02.696 [2024-07-23 10:51:51.156570] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:02.696 request: 00:31:02.696 { 00:31:02.696 "name": "nvme_second", 00:31:02.696 "trtype": "tcp", 00:31:02.696 "traddr": "10.0.0.2", 00:31:02.696 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:02.696 "adrfam": "ipv4", 00:31:02.696 "trsvcid": "8010", 00:31:02.696 "attach_timeout_ms": 3000, 00:31:02.696 "method": "bdev_nvme_start_discovery", 00:31:02.696 "req_id": 1 00:31:02.696 } 
00:31:02.696 Got JSON-RPC error response 00:31:02.696 response: 00:31:02.696 { 00:31:02.696 "code": -110, 00:31:02.696 "message": "Connection timed out" 00:31:02.696 } 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:02.696 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3922031 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:03.072 10:51:51 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:03.072 rmmod nvme_tcp 00:31:03.072 rmmod nvme_fabrics 00:31:03.072 rmmod nvme_keyring 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3922006 ']' 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3922006 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3922006 ']' 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3922006 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3922006 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3922006' 00:31:03.072 killing process with pid 3922006 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@965 -- # kill 3922006 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3922006 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:03.072 10:51:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.605 10:51:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:05.605 00:31:05.605 real 0m12.954s 00:31:05.605 user 0m19.405s 00:31:05.605 sys 0m2.537s 00:31:05.605 10:51:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:05.605 10:51:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.605 ************************************ 00:31:05.605 END TEST nvmf_host_discovery 00:31:05.605 ************************************ 00:31:05.605 10:51:53 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:05.605 10:51:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:05.605 10:51:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:05.605 10:51:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.605 
************************************ 00:31:05.605 START TEST nvmf_host_multipath_status 00:31:05.605 ************************************ 00:31:05.605 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:05.605 * Looking for test storage... 00:31:05.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:05.605 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.605 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:05.605 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.605 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:31:05.606 10:51:53 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:05.606 10:51:53 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.606 
10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:05.606 10:51:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:31:06.986 Found 0000:08:00.0 (0x8086 - 0x159b) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:31:06.986 Found 0000:08:00.1 (0x8086 - 0x159b) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:31:06.986 Found net devices under 0000:08:00.0: cvl_0_0 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:31:06.986 Found net devices under 0000:08:00.1: cvl_0_1 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.986 10:51:55 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.986 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:06.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:06.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:31:06.987 00:31:06.987 --- 10.0.0.2 ping statistics --- 00:31:06.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.987 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:06.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:31:06.987 00:31:06.987 --- 10.0.0.1 ping statistics --- 00:31:06.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.987 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3924405 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3924405 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3924405 ']' 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:06.987 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:06.987 [2024-07-23 10:51:55.462984] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:31:06.987 [2024-07-23 10:51:55.463082] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.245 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.245 [2024-07-23 10:51:55.528592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:07.245 [2024-07-23 10:51:55.615561] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.245 [2024-07-23 10:51:55.615627] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.245 [2024-07-23 10:51:55.615643] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.245 [2024-07-23 10:51:55.615656] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.245 [2024-07-23 10:51:55.615667] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:07.245 [2024-07-23 10:51:55.615762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.245 [2024-07-23 10:51:55.615794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.245 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:07.245 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:07.245 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:07.245 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:07.245 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:07.245 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.245 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3924405 00:31:07.245 10:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:07.812 [2024-07-23 10:51:56.007377] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.812 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:08.071 Malloc0 00:31:08.071 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:08.329 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:08.587 10:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.845 [2024-07-23 10:51:57.217534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.845 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:09.104 [2024-07-23 10:51:57.498260] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:09.104 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3924625 00:31:09.104 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:09.104 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:09.104 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3924625 /var/tmp/bdevperf.sock 00:31:09.104 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3924625 ']' 00:31:09.104 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:09.104 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:09.104 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:31:09.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:09.104 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:09.104 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:09.674 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:09.674 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:09.675 10:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:09.935 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:10.194 Nvme0n1 00:31:10.194 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:10.763 Nvme0n1 00:31:10.763 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:10.763 10:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:12.671 10:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:12.671 10:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:12.930 10:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:13.190 10:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:14.129 10:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:14.129 10:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:14.129 10:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.129 10:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:14.694 10:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.694 10:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:14.694 10:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.694 10:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:14.694 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:14.694 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:14.694 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.694 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:14.955 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.955 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:14.955 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.955 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:15.213 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.213 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:15.213 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.213 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:15.471 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.471 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:15.471 10:52:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.471 10:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:15.729 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.729 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:15.729 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:15.987 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:16.248 10:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:17.187 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:17.187 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:17.187 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.187 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:17.445 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:31:17.445 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:17.445 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.445 10:52:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:17.704 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.704 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:17.704 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.704 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:18.271 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.271 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:18.272 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.272 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:18.272 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.272 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:31:18.272 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.272 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:18.530 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.530 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:18.530 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.530 10:52:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:18.788 10:52:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.788 10:52:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:18.788 10:52:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:19.046 10:52:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:19.306 10:52:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:20.245 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:20.245 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:20.245 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.245 10:52:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:20.812 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.812 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:20.812 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.812 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:21.071 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:21.071 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:21.071 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.071 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:21.330 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.330 10:52:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:21.330 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.330 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:21.587 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.587 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:21.587 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.587 10:52:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:21.855 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.855 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:21.855 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.855 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:22.139 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.139 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
00:31:22.139 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:22.400 10:52:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:22.967 10:52:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:23.904 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:23.904 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:23.904 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.905 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:24.162 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.162 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:24.162 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.162 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:24.421 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:31:24.421 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:24.421 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.421 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:24.679 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.679 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:24.679 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.679 10:52:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:24.938 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.938 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:24.938 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.938 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:25.197 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.197 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:31:25.197 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.197 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:25.455 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:25.455 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:25.455 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:25.714 10:52:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:25.973 10:52:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:26.912 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:26.912 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:26.912 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.912 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:27.170 10:52:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:27.170 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:27.170 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.170 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:27.429 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:27.429 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:27.429 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.429 10:52:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:27.687 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.687 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:27.687 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.687 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:27.945 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.945 
10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:27.945 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.945 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:28.511 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:28.511 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:28.511 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.511 10:52:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:28.511 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:28.511 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:28.511 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:29.078 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:29.338 10:52:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:31:30.276 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:30.276 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:30.276 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.276 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:30.535 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:30.535 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:30.535 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.535 10:52:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:30.793 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.793 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:30.793 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.793 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:31.051 10:52:19 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.051 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:31.051 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.051 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:31.621 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.621 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:31.621 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.621 10:52:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:31.880 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:31.880 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:31.880 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.880 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:32.138 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.138 10:52:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:32.396 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:32.396 10:52:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:32.655 10:52:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:32.913 10:52:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:33.847 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:33.847 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:33.847 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.847 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:34.416 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.416 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:34.416 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.416 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:34.674 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.674 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:34.674 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.674 10:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:34.932 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.932 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:34.932 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.932 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:35.189 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.189 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:35.189 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:31:35.189 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:35.446 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.446 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:35.446 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.446 10:52:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:35.704 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.704 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:35.704 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:35.962 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:36.221 10:52:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:37.598 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:37.598 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:37.598 10:52:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.598 10:52:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:37.598 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:37.598 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:37.598 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.598 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:37.857 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.857 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:37.857 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.857 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:38.117 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.117 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:38.375 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.375 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:38.633 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.633 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:38.633 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.633 10:52:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:38.891 10:52:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.891 10:52:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:38.891 10:52:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.891 10:52:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:39.149 10:52:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.149 10:52:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:39.149 10:52:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:39.406 10:52:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:39.666 10:52:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:40.604 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:40.604 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:40.604 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.604 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:40.861 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.861 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:40.861 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.861 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:41.118 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.118 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:41.118 10:52:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.118 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:41.376 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.376 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:41.376 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.376 10:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:41.633 10:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.633 10:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:41.633 10:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.633 10:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:41.890 10:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.890 10:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:41.890 10:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.890 10:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:42.147 10:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.147 10:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:42.147 10:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:42.405 10:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:42.663 10:52:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:43.600 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:43.600 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:43.600 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.600 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:43.859 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.859 10:52:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:43.859 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.859 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:44.424 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:44.424 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:44.424 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.424 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:44.683 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.683 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:44.683 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.683 10:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:44.941 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.941 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:44.941 
10:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.941 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:45.200 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.200 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:45.200 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.200 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:45.460 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:45.461 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3924625 00:31:45.461 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3924625 ']' 00:31:45.461 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3924625 00:31:45.461 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:45.461 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:45.461 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3924625 00:31:45.461 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:45.461 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:45.461 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3924625' 00:31:45.461 killing process with pid 3924625 00:31:45.461 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3924625 00:31:45.461 10:52:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3924625 00:31:45.723 Connection closed with partial response: 00:31:45.723 00:31:45.723 00:31:45.723 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3924625 00:31:45.723 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:45.723 [2024-07-23 10:51:57.554545] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:31:45.723 [2024-07-23 10:51:57.554622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924625 ] 00:31:45.723 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.723 [2024-07-23 10:51:57.623705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.723 [2024-07-23 10:51:57.724746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:45.723 Running I/O for 90 seconds... 
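The `port_status` checks traced above poll `bdev_nvme_get_io_paths` over the bdevperf RPC socket and run a `jq` select on `transport.trsvcid` to read one flag (`current`, `connected`, or `accessible`) per listener port, before and after `set_ANA_state` flips a listener to `inaccessible`. A minimal Python sketch of that selection logic, using a hypothetical sample document that only assumes the JSON shape implied by the jq filter `.poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").ATTR` (not real RPC output):

```python
import json

# Hypothetical sample mimicking the shape the jq filters above imply
# for bdev_nvme_get_io_paths output; values are illustrative only.
SAMPLE = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"},
         "current": true, "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"},
         "current": false, "connected": true, "accessible": false}
      ]
    }
  ]
}
""")

def port_status(data, port, attr):
    """Return one flag of the io_path listening on `port`, mirroring:
    jq '.poll_groups[].io_paths[]
        | select(.transport.trsvcid=="<port>").<attr>'
    """
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[attr]
    return None  # no path on that port

# Mirrors the checks traced above, e.g. check_status true false true ...
print(port_status(SAMPLE, "4420", "current"))     # → True
print(port_status(SAMPLE, "4421", "accessible"))  # → False
```

With 4421 set `inaccessible`, the test expects exactly this pattern: 4420 is the `current` path and both stay `connected`, while only 4421 loses `accessible` — which is what the `[[ true == \t\r\u\e ]]` / `[[ false == \f\a\l\s\e ]]` comparisons in the trace verify.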
00:31:45.723 [2024-07-23 10:52:13.954371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.954439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.954513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.954537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.954564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.954582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.954607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.954625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.954649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.954666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.954690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 
[2024-07-23 10:52:13.954707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.954732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.954748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.954773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.954790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.954813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.954831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.954855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.954872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.954896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.954927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 
10:52:13.954953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.954969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.954994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.955010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.955034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.723 [2024-07-23 10:52:13.955051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.955076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.723 [2024-07-23 10:52:13.955092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.955117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.723 [2024-07-23 10:52:13.955134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.955159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.723 [2024-07-23 
10:52:13.955176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.955200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.723 [2024-07-23 10:52:13.955217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.955241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.723 [2024-07-23 10:52:13.955258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:45.723 [2024-07-23 10:52:13.955283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 
10:52:13.955406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 
10:52:13.955650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 
10:52:13.955891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.955978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.955996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.956037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.956078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 
10:52:13.956119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.956160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.956200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.956241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.956283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.956325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 
10:52:13.956349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.956365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.956407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.956448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.724 [2024-07-23 10:52:13.956764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.724 [2024-07-23 10:52:13.956816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.724 [2024-07-23 
10:52:13.956862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.724 [2024-07-23 10:52:13.956908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.724 [2024-07-23 10:52:13.956954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.956982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.724 [2024-07-23 10:52:13.956999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.957027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.724 [2024-07-23 10:52:13.957045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.957073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.724 [2024-07-23 10:52:13.957090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 
10:52:13.957118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.724 [2024-07-23 10:52:13.957136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.957170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.724 [2024-07-23 10:52:13.957188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:45.724 [2024-07-23 10:52:13.957216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957377] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.957966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.957983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.958958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:45.725 [2024-07-23 10:52:13.958986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.725 [2024-07-23 10:52:13.959003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.959950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.959984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.960357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.960408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:13.960458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:13.960871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:13.960888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:31.044092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.726 [2024-07-23 10:52:31.044167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:31.046128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:31.046157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:31.046190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:31.046210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:31.046236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:31.046254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:31.046278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:31.046295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:45.726 [2024-07-23 10:52:31.046332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.726 [2024-07-23 10:52:31.046350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.046966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.046983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.047024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.727 [2024-07-23 10:52:31.047065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.727 [2024-07-23 10:52:31.047106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.727 [2024-07-23 10:52:31.047148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.047558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.047605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.047647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.047690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.047736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.047779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.047820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.047861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.047902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.047943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.047967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.047984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.048009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.727 [2024-07-23 10:52:31.048026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:45.727 [2024-07-23 10:52:31.048050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.728 [2024-07-23 10:52:31.048579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.728 [2024-07-23 10:52:31.048621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.728 [2024-07-23 10:52:31.048662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.728 [2024-07-23 10:52:31.048704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:45.728 [2024-07-23 10:52:31.048728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.728 [2024-07-23 10:52:31.048745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:31:45.728 [2024-07-23 10:52:31.048774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:45.728 [2024-07-23 10:52:31.048792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:31:45.728 [2024-07-23 10:52:31.048816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:45.728 [2024-07-23 10:52:31.048834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:31:45.728 Received shutdown signal, test time was about 34.702343 seconds
00:31:45.728
00:31:45.728 Latency(us)
00:31:45.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:45.728 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:45.728 Verification LBA range: start 0x0 length 0x4000
00:31:45.728 Nvme0n1 : 34.70 7335.43 28.65 0.00 0.00 17415.79 251.83 4026531.84
00:31:45.728 ===================================================================================================================
00:31:45.728 Total : 7335.43 28.65 0.00 0.00 17415.79 251.83 4026531.84
00:31:45.728 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:45.987 10:52:34
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:45.987 rmmod nvme_tcp 00:31:45.987 rmmod nvme_fabrics 00:31:45.987 rmmod nvme_keyring 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3924405 ']' 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3924405 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3924405 ']' 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3924405 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3924405 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:45.987 10:52:34 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3924405' 00:31:45.987 killing process with pid 3924405 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3924405 00:31:45.987 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3924405 00:31:46.245 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:46.245 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:46.245 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:46.245 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:46.245 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:46.246 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.246 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:46.246 10:52:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.783 10:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:48.783 00:31:48.783 real 0m43.120s 00:31:48.783 user 2m12.316s 00:31:48.783 sys 0m10.755s 00:31:48.783 10:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:48.783 10:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:48.783 ************************************ 00:31:48.783 END TEST nvmf_host_multipath_status 00:31:48.783 ************************************ 00:31:48.783 10:52:36 
nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:48.783 10:52:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:48.783 10:52:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:48.783 10:52:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:48.783 ************************************ 00:31:48.783 START TEST nvmf_discovery_remove_ifc 00:31:48.783 ************************************ 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:48.783 * Looking for test storage... 00:31:48.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- 
# NVMF_TRANSPORT_OPTS= 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:48.783 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:48.784 10:52:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:48.784 10:52:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:50.163 10:52:38 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:31:50.163 Found 0000:08:00.0 (0x8086 - 0x159b) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp 
== rdma ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:31:50.163 Found 0000:08:00.1 (0x8086 - 0x159b) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:31:50.163 Found net devices under 0000:08:00.0: cvl_0_0 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:31:50.163 Found net devices under 0000:08:00.1: cvl_0_1 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.163 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:50.164 10:52:38 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up
00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:31:50.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:50.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms
00:31:50.164
00:31:50.164 --- 10.0.0.2 ping statistics ---
00:31:50.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:50.164 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms
00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:50.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:50.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms
00:31:50.164
00:31:50.164 --- 10.0.0.1 ping statistics ---
00:31:50.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:50.164 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms
00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc --
nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3929673 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3929673 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3929673 ']' 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:50.164 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.164 [2024-07-23 10:52:38.523718] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:31:50.164 [2024-07-23 10:52:38.523812] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.164 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.164 [2024-07-23 10:52:38.589344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.423 [2024-07-23 10:52:38.675616] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:50.423 [2024-07-23 10:52:38.675670] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:50.423 [2024-07-23 10:52:38.675686] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:50.423 [2024-07-23 10:52:38.675700] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:50.423 [2024-07-23 10:52:38.675712] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:50.423 [2024-07-23 10:52:38.675752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.423 [2024-07-23 10:52:38.813691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.423 [2024-07-23 10:52:38.821889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:50.423 null0 00:31:50.423 [2024-07-23 10:52:38.853814] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3929694 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3929694 /tmp/host.sock 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3929694 ']' 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:50.423 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:50.424 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:50.424 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:50.424 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:50.424 10:52:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.424 [2024-07-23 10:52:38.923544] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:31:50.424 [2024-07-23 10:52:38.923637] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929694 ] 00:31:50.684 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.684 [2024-07-23 10:52:38.985193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.684 [2024-07-23 10:52:39.072625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.684 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:50.684 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:50.684 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:50.684 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:50.684 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.684 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.684 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.684 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:50.684 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.684 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.944 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.944 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:50.944 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.944 10:52:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:51.882 [2024-07-23 10:52:40.321128] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:51.882 [2024-07-23 10:52:40.321181] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:51.882 [2024-07-23 10:52:40.321205] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:52.143 [2024-07-23 10:52:40.450618] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:52.143 [2024-07-23 10:52:40.512261] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:52.143 [2024-07-23 10:52:40.512338] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:52.143 [2024-07-23 10:52:40.512380] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:52.143 [2024-07-23 10:52:40.512411] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:52.143 [2024-07-23 10:52:40.512451] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:52.143 [2024-07-23 10:52:40.519858] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12b3530 was disconnected and freed. delete nvme_qpair. 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:52.143 10:52:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:53.525 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:53.525 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:53.525 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.525 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:53.525 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:53.525 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:53.525 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:53.525 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.525 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:53.525 10:52:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:54.463 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:54.463 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:54.463 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:54.463 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.463 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.463 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:54.463 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:54.463 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.463 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:54.463 10:52:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:55.404 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:55.404 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:55.404 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:55.404 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.404 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.404 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:55.404 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:55.404 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.404 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:55.404 10:52:43 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:56.346 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:56.346 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.346 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:56.346 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.346 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.346 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:56.346 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:56.346 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.346 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:56.346 10:52:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:57.731 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:57.732 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.732 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:57.732 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.732 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:57.732 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:57.732 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:57.732 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.732 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:57.732 10:52:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:57.732 [2024-07-23 10:52:45.953108] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:57.732 [2024-07-23 10:52:45.953189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.732 [2024-07-23 10:52:45.953212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.732 [2024-07-23 10:52:45.953231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.732 [2024-07-23 10:52:45.953246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.732 [2024-07-23 10:52:45.953262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.732 [2024-07-23 10:52:45.953276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.732 [2024-07-23 10:52:45.953291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.732 [2024-07-23 10:52:45.953306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.732 [2024-07-23 10:52:45.953321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 
cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.732 [2024-07-23 10:52:45.953336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.732 [2024-07-23 10:52:45.953350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a560 is same with the state(5) to be set 00:31:57.732 [2024-07-23 10:52:45.963137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a560 (9): Bad file descriptor 00:31:57.732 [2024-07-23 10:52:45.973168] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:58.672 10:52:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:58.672 10:52:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.672 10:52:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:58.672 10:52:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.672 10:52:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.672 10:52:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:58.672 10:52:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:58.672 [2024-07-23 10:52:46.995526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:58.672 [2024-07-23 10:52:46.995598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a560 with addr=10.0.0.2, port=4420 00:31:58.672 [2024-07-23 10:52:46.995621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a560 is same with the state(5) to be set 00:31:58.672 [2024-07-23 10:52:46.995675] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a560 (9): Bad file descriptor 00:31:58.672 [2024-07-23 10:52:46.996105] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:58.672 [2024-07-23 10:52:46.996132] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:58.672 [2024-07-23 10:52:46.996146] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:58.672 [2024-07-23 10:52:46.996159] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:58.672 [2024-07-23 10:52:46.996187] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:58.672 [2024-07-23 10:52:46.996203] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:58.672 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.672 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:58.672 10:52:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:59.609 [2024-07-23 10:52:47.998702] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:59.609 [2024-07-23 10:52:47.998772] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:59.609 [2024-07-23 10:52:47.998786] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:59.609 [2024-07-23 10:52:47.998799] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:59.609 [2024-07-23 10:52:47.998829] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:59.609 [2024-07-23 10:52:47.998873] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:59.609 [2024-07-23 10:52:47.998937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.609 [2024-07-23 10:52:47.998956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.609 [2024-07-23 10:52:47.998973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.609 [2024-07-23 10:52:47.998985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.609 [2024-07-23 10:52:47.998997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.609 [2024-07-23 10:52:47.999010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.609 [2024-07-23 10:52:47.999023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.609 
[2024-07-23 10:52:47.999035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.609 [2024-07-23 10:52:47.999048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.609 [2024-07-23 10:52:47.999060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.609 [2024-07-23 10:52:47.999072] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:31:59.609 [2024-07-23 10:52:47.999122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12799f0 (9): Bad file descriptor 00:31:59.609 [2024-07-23 10:52:48.000108] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:59.609 [2024-07-23 10:52:48.000137] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:59.609 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.868 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:59.868 10:52:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:00.806 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:00.806 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:00.806 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:00.806 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.807 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.807 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:00.807 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:00.807 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.807 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:00.807 10:52:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:01.745 [2024-07-23 10:52:50.058419] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:01.745 [2024-07-23 10:52:50.058473] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:01.745 [2024-07-23 10:52:50.058505] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:01.745 10:52:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:01.745 [2024-07-23 10:52:50.185894] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:01.745 10:52:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.745 10:52:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:01.745 10:52:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.745 10:52:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.745 
10:52:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:01.745 10:52:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:01.745 10:52:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.745 10:52:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:01.745 10:52:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:02.004 [2024-07-23 10:52:50.368020] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:02.004 [2024-07-23 10:52:50.368073] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:02.004 [2024-07-23 10:52:50.368108] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:02.004 [2024-07-23 10:52:50.368134] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:02.004 [2024-07-23 10:52:50.368149] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:02.004 [2024-07-23 10:52:50.375870] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12936f0 was disconnected and freed. delete nvme_qpair. 
00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3929694 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3929694 ']' 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3929694 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3929694 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:02.943 10:52:51 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3929694' 00:32:02.943 killing process with pid 3929694 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3929694 00:32:02.943 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3929694 00:32:03.203 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:03.203 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:03.203 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:03.203 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:03.203 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:03.203 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:03.203 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:03.203 rmmod nvme_tcp 00:32:03.203 rmmod nvme_fabrics 00:32:03.203 rmmod nvme_keyring 00:32:03.203 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:03.203 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:03.203 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:03.204 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3929673 ']' 00:32:03.204 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3929673 00:32:03.204 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3929673 ']' 00:32:03.204 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # 
kill -0 3929673 00:32:03.204 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:03.204 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:03.204 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3929673 00:32:03.204 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:03.204 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:03.204 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3929673' 00:32:03.204 killing process with pid 3929673 00:32:03.204 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3929673 00:32:03.204 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3929673 00:32:03.464 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:03.464 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:03.464 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:03.464 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:03.464 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:03.464 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.464 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:03.464 10:52:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.374 10:52:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:32:05.374 00:32:05.374 real 0m17.041s 00:32:05.374 user 0m25.295s 00:32:05.374 sys 0m2.663s 00:32:05.374 10:52:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:05.374 10:52:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.374 ************************************ 00:32:05.374 END TEST nvmf_discovery_remove_ifc 00:32:05.374 ************************************ 00:32:05.374 10:52:53 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:05.374 10:52:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:05.374 10:52:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:05.374 10:52:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:05.374 ************************************ 00:32:05.374 START TEST nvmf_identify_kernel_target 00:32:05.374 ************************************ 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:05.374 * Looking for test storage... 
00:32:05.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.374 10:52:53 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.374 10:52:53 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:05.374 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:05.652 10:52:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:07.054 10:52:55 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.054 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:32:07.055 Found 0000:08:00.0 (0x8086 - 0x159b) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.055 10:52:55 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:32:07.055 Found 0000:08:00.1 (0x8086 - 0x159b) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:32:07.055 Found net devices under 0000:08:00.0: cvl_0_0 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:32:07.055 Found net devices under 0000:08:00.1: cvl_0_1 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:07.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:32:07.055 00:32:07.055 --- 10.0.0.2 ping statistics --- 00:32:07.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.055 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:07.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:32:07.055 00:32:07.055 --- 10.0.0.1 ping statistics --- 00:32:07.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.055 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:07.055 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.314 10:52:55 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:07.314 10:52:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:08.253 Waiting for block devices as requested 00:32:08.253 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:32:08.253 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:32:08.253 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:32:08.513 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:32:08.513 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:32:08.513 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:32:08.513 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:32:08.773 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:32:08.773 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:32:08.773 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:32:08.773 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:32:09.032 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:32:09.032 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:32:09.032 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:32:09.292 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:32:09.292 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:32:09.292 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:32:09.292 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:09.292 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:09.292 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:09.292 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:09.292 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:09.292 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:09.292 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:09.292 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:09.292 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:09.553 No valid GPT data, bailing 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:32:09.553 00:32:09.553 Discovery Log Number of Records 2, Generation counter 2 00:32:09.553 =====Discovery Log Entry 0====== 00:32:09.553 trtype: tcp 00:32:09.553 adrfam: ipv4 00:32:09.553 subtype: current discovery subsystem 00:32:09.553 treq: not specified, sq flow control disable supported 00:32:09.553 portid: 1 00:32:09.553 trsvcid: 4420 00:32:09.553 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:09.553 traddr: 10.0.0.1 00:32:09.553 eflags: none 00:32:09.553 sectype: none 00:32:09.553 =====Discovery Log Entry 1====== 00:32:09.553 trtype: tcp 00:32:09.553 adrfam: ipv4 00:32:09.553 subtype: nvme subsystem 00:32:09.553 treq: not specified, sq flow control disable supported 00:32:09.553 portid: 1 00:32:09.553 trsvcid: 4420 00:32:09.553 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:09.553 traddr: 10.0.0.1 00:32:09.553 eflags: none 00:32:09.553 sectype: none 00:32:09.553 10:52:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:09.553 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:09.553 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.553 ===================================================== 00:32:09.553 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:09.553 ===================================================== 00:32:09.553 Controller Capabilities/Features 00:32:09.553 ================================ 00:32:09.553 Vendor ID: 0000 00:32:09.553 Subsystem Vendor ID: 0000 00:32:09.553 Serial Number: da8a833149e65f571cdf 00:32:09.553 Model Number: Linux 00:32:09.553 Firmware Version: 6.7.0-68 00:32:09.553 Recommended Arb Burst: 0 00:32:09.553 IEEE OUI Identifier: 00 00 00 00:32:09.553 Multi-path I/O 00:32:09.553 May have multiple subsystem ports: No 00:32:09.553 May have multiple controllers: No 00:32:09.553 Associated with SR-IOV VF: No 00:32:09.553 Max Data Transfer Size: Unlimited 00:32:09.553 Max Number of Namespaces: 0 00:32:09.553 Max Number of I/O Queues: 1024 00:32:09.553 NVMe Specification Version (VS): 1.3 00:32:09.553 NVMe Specification Version (Identify): 1.3 00:32:09.553 Maximum Queue Entries: 1024 00:32:09.553 Contiguous Queues Required: No 00:32:09.553 Arbitration Mechanisms Supported 00:32:09.553 Weighted Round Robin: Not Supported 00:32:09.553 Vendor Specific: Not Supported 00:32:09.553 Reset Timeout: 7500 ms 00:32:09.553 Doorbell Stride: 4 bytes 00:32:09.553 NVM Subsystem Reset: Not Supported 00:32:09.553 Command Sets Supported 00:32:09.553 NVM Command Set: Supported 00:32:09.553 Boot Partition: Not Supported 00:32:09.553 Memory Page Size Minimum: 4096 bytes 00:32:09.553 Memory Page Size Maximum: 4096 bytes 00:32:09.553 Persistent Memory Region: Not Supported 00:32:09.553 Optional Asynchronous Events Supported 00:32:09.553 Namespace Attribute Notices: Not Supported 00:32:09.553 Firmware Activation Notices: Not Supported 00:32:09.553 ANA Change Notices: Not Supported 00:32:09.553 PLE Aggregate Log Change Notices: Not Supported 
00:32:09.553 LBA Status Info Alert Notices: Not Supported 00:32:09.553 EGE Aggregate Log Change Notices: Not Supported 00:32:09.553 Normal NVM Subsystem Shutdown event: Not Supported 00:32:09.553 Zone Descriptor Change Notices: Not Supported 00:32:09.553 Discovery Log Change Notices: Supported 00:32:09.553 Controller Attributes 00:32:09.553 128-bit Host Identifier: Not Supported 00:32:09.553 Non-Operational Permissive Mode: Not Supported 00:32:09.553 NVM Sets: Not Supported 00:32:09.553 Read Recovery Levels: Not Supported 00:32:09.553 Endurance Groups: Not Supported 00:32:09.553 Predictable Latency Mode: Not Supported 00:32:09.553 Traffic Based Keep ALive: Not Supported 00:32:09.553 Namespace Granularity: Not Supported 00:32:09.553 SQ Associations: Not Supported 00:32:09.553 UUID List: Not Supported 00:32:09.553 Multi-Domain Subsystem: Not Supported 00:32:09.553 Fixed Capacity Management: Not Supported 00:32:09.553 Variable Capacity Management: Not Supported 00:32:09.553 Delete Endurance Group: Not Supported 00:32:09.553 Delete NVM Set: Not Supported 00:32:09.553 Extended LBA Formats Supported: Not Supported 00:32:09.553 Flexible Data Placement Supported: Not Supported 00:32:09.553 00:32:09.553 Controller Memory Buffer Support 00:32:09.553 ================================ 00:32:09.553 Supported: No 00:32:09.553 00:32:09.553 Persistent Memory Region Support 00:32:09.553 ================================ 00:32:09.553 Supported: No 00:32:09.553 00:32:09.553 Admin Command Set Attributes 00:32:09.553 ============================ 00:32:09.553 Security Send/Receive: Not Supported 00:32:09.553 Format NVM: Not Supported 00:32:09.553 Firmware Activate/Download: Not Supported 00:32:09.553 Namespace Management: Not Supported 00:32:09.553 Device Self-Test: Not Supported 00:32:09.553 Directives: Not Supported 00:32:09.553 NVMe-MI: Not Supported 00:32:09.553 Virtualization Management: Not Supported 00:32:09.553 Doorbell Buffer Config: Not Supported 00:32:09.553 Get LBA Status 
Capability: Not Supported 00:32:09.553 Command & Feature Lockdown Capability: Not Supported 00:32:09.553 Abort Command Limit: 1 00:32:09.553 Async Event Request Limit: 1 00:32:09.553 Number of Firmware Slots: N/A 00:32:09.553 Firmware Slot 1 Read-Only: N/A 00:32:09.553 Firmware Activation Without Reset: N/A 00:32:09.553 Multiple Update Detection Support: N/A 00:32:09.553 Firmware Update Granularity: No Information Provided 00:32:09.553 Per-Namespace SMART Log: No 00:32:09.553 Asymmetric Namespace Access Log Page: Not Supported 00:32:09.553 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:09.553 Command Effects Log Page: Not Supported 00:32:09.553 Get Log Page Extended Data: Supported 00:32:09.553 Telemetry Log Pages: Not Supported 00:32:09.554 Persistent Event Log Pages: Not Supported 00:32:09.554 Supported Log Pages Log Page: May Support 00:32:09.554 Commands Supported & Effects Log Page: Not Supported 00:32:09.554 Feature Identifiers & Effects Log Page:May Support 00:32:09.554 NVMe-MI Commands & Effects Log Page: May Support 00:32:09.554 Data Area 4 for Telemetry Log: Not Supported 00:32:09.554 Error Log Page Entries Supported: 1 00:32:09.554 Keep Alive: Not Supported 00:32:09.554 00:32:09.554 NVM Command Set Attributes 00:32:09.554 ========================== 00:32:09.554 Submission Queue Entry Size 00:32:09.554 Max: 1 00:32:09.554 Min: 1 00:32:09.554 Completion Queue Entry Size 00:32:09.554 Max: 1 00:32:09.554 Min: 1 00:32:09.554 Number of Namespaces: 0 00:32:09.554 Compare Command: Not Supported 00:32:09.554 Write Uncorrectable Command: Not Supported 00:32:09.554 Dataset Management Command: Not Supported 00:32:09.554 Write Zeroes Command: Not Supported 00:32:09.554 Set Features Save Field: Not Supported 00:32:09.554 Reservations: Not Supported 00:32:09.554 Timestamp: Not Supported 00:32:09.554 Copy: Not Supported 00:32:09.554 Volatile Write Cache: Not Present 00:32:09.554 Atomic Write Unit (Normal): 1 00:32:09.554 Atomic Write Unit (PFail): 1 
00:32:09.554 Atomic Compare & Write Unit: 1 00:32:09.554 Fused Compare & Write: Not Supported 00:32:09.554 Scatter-Gather List 00:32:09.554 SGL Command Set: Supported 00:32:09.554 SGL Keyed: Not Supported 00:32:09.554 SGL Bit Bucket Descriptor: Not Supported 00:32:09.554 SGL Metadata Pointer: Not Supported 00:32:09.554 Oversized SGL: Not Supported 00:32:09.554 SGL Metadata Address: Not Supported 00:32:09.554 SGL Offset: Supported 00:32:09.554 Transport SGL Data Block: Not Supported 00:32:09.554 Replay Protected Memory Block: Not Supported 00:32:09.554 00:32:09.554 Firmware Slot Information 00:32:09.554 ========================= 00:32:09.554 Active slot: 0 00:32:09.554 00:32:09.554 00:32:09.554 Error Log 00:32:09.554 ========= 00:32:09.554 00:32:09.554 Active Namespaces 00:32:09.554 ================= 00:32:09.554 Discovery Log Page 00:32:09.554 ================== 00:32:09.554 Generation Counter: 2 00:32:09.554 Number of Records: 2 00:32:09.554 Record Format: 0 00:32:09.554 00:32:09.554 Discovery Log Entry 0 00:32:09.554 ---------------------- 00:32:09.554 Transport Type: 3 (TCP) 00:32:09.554 Address Family: 1 (IPv4) 00:32:09.554 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:09.554 Entry Flags: 00:32:09.554 Duplicate Returned Information: 0 00:32:09.554 Explicit Persistent Connection Support for Discovery: 0 00:32:09.554 Transport Requirements: 00:32:09.554 Secure Channel: Not Specified 00:32:09.554 Port ID: 1 (0x0001) 00:32:09.554 Controller ID: 65535 (0xffff) 00:32:09.554 Admin Max SQ Size: 32 00:32:09.554 Transport Service Identifier: 4420 00:32:09.554 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:09.554 Transport Address: 10.0.0.1 00:32:09.554 Discovery Log Entry 1 00:32:09.554 ---------------------- 00:32:09.554 Transport Type: 3 (TCP) 00:32:09.554 Address Family: 1 (IPv4) 00:32:09.554 Subsystem Type: 2 (NVM Subsystem) 00:32:09.554 Entry Flags: 00:32:09.554 Duplicate Returned Information: 0 00:32:09.554 Explicit Persistent 
Connection Support for Discovery: 0 00:32:09.554 Transport Requirements: 00:32:09.554 Secure Channel: Not Specified 00:32:09.554 Port ID: 1 (0x0001) 00:32:09.554 Controller ID: 65535 (0xffff) 00:32:09.554 Admin Max SQ Size: 32 00:32:09.554 Transport Service Identifier: 4420 00:32:09.554 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:09.554 Transport Address: 10.0.0.1 00:32:09.554 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:09.815 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.815 get_feature(0x01) failed 00:32:09.815 get_feature(0x02) failed 00:32:09.815 get_feature(0x04) failed 00:32:09.815 ===================================================== 00:32:09.815 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:09.815 ===================================================== 00:32:09.815 Controller Capabilities/Features 00:32:09.815 ================================ 00:32:09.815 Vendor ID: 0000 00:32:09.815 Subsystem Vendor ID: 0000 00:32:09.815 Serial Number: 1bcd9d8e39e02815c00c 00:32:09.815 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:09.815 Firmware Version: 6.7.0-68 00:32:09.815 Recommended Arb Burst: 6 00:32:09.815 IEEE OUI Identifier: 00 00 00 00:32:09.815 Multi-path I/O 00:32:09.815 May have multiple subsystem ports: Yes 00:32:09.815 May have multiple controllers: Yes 00:32:09.815 Associated with SR-IOV VF: No 00:32:09.815 Max Data Transfer Size: Unlimited 00:32:09.815 Max Number of Namespaces: 1024 00:32:09.815 Max Number of I/O Queues: 128 00:32:09.815 NVMe Specification Version (VS): 1.3 00:32:09.815 NVMe Specification Version (Identify): 1.3 00:32:09.815 Maximum Queue Entries: 1024 00:32:09.815 Contiguous Queues Required: No 00:32:09.815 Arbitration Mechanisms Supported 
00:32:09.815 Weighted Round Robin: Not Supported 00:32:09.815 Vendor Specific: Not Supported 00:32:09.815 Reset Timeout: 7500 ms 00:32:09.815 Doorbell Stride: 4 bytes 00:32:09.815 NVM Subsystem Reset: Not Supported 00:32:09.815 Command Sets Supported 00:32:09.815 NVM Command Set: Supported 00:32:09.815 Boot Partition: Not Supported 00:32:09.815 Memory Page Size Minimum: 4096 bytes 00:32:09.815 Memory Page Size Maximum: 4096 bytes 00:32:09.815 Persistent Memory Region: Not Supported 00:32:09.815 Optional Asynchronous Events Supported 00:32:09.815 Namespace Attribute Notices: Supported 00:32:09.815 Firmware Activation Notices: Not Supported 00:32:09.815 ANA Change Notices: Supported 00:32:09.815 PLE Aggregate Log Change Notices: Not Supported 00:32:09.815 LBA Status Info Alert Notices: Not Supported 00:32:09.815 EGE Aggregate Log Change Notices: Not Supported 00:32:09.815 Normal NVM Subsystem Shutdown event: Not Supported 00:32:09.815 Zone Descriptor Change Notices: Not Supported 00:32:09.815 Discovery Log Change Notices: Not Supported 00:32:09.815 Controller Attributes 00:32:09.815 128-bit Host Identifier: Supported 00:32:09.815 Non-Operational Permissive Mode: Not Supported 00:32:09.815 NVM Sets: Not Supported 00:32:09.815 Read Recovery Levels: Not Supported 00:32:09.815 Endurance Groups: Not Supported 00:32:09.815 Predictable Latency Mode: Not Supported 00:32:09.815 Traffic Based Keep ALive: Supported 00:32:09.815 Namespace Granularity: Not Supported 00:32:09.815 SQ Associations: Not Supported 00:32:09.815 UUID List: Not Supported 00:32:09.815 Multi-Domain Subsystem: Not Supported 00:32:09.815 Fixed Capacity Management: Not Supported 00:32:09.815 Variable Capacity Management: Not Supported 00:32:09.815 Delete Endurance Group: Not Supported 00:32:09.815 Delete NVM Set: Not Supported 00:32:09.815 Extended LBA Formats Supported: Not Supported 00:32:09.815 Flexible Data Placement Supported: Not Supported 00:32:09.815 00:32:09.815 Controller Memory Buffer Support 
00:32:09.815 ================================ 00:32:09.815 Supported: No 00:32:09.815 00:32:09.815 Persistent Memory Region Support 00:32:09.815 ================================ 00:32:09.815 Supported: No 00:32:09.815 00:32:09.815 Admin Command Set Attributes 00:32:09.815 ============================ 00:32:09.815 Security Send/Receive: Not Supported 00:32:09.815 Format NVM: Not Supported 00:32:09.815 Firmware Activate/Download: Not Supported 00:32:09.815 Namespace Management: Not Supported 00:32:09.815 Device Self-Test: Not Supported 00:32:09.815 Directives: Not Supported 00:32:09.815 NVMe-MI: Not Supported 00:32:09.815 Virtualization Management: Not Supported 00:32:09.815 Doorbell Buffer Config: Not Supported 00:32:09.815 Get LBA Status Capability: Not Supported 00:32:09.815 Command & Feature Lockdown Capability: Not Supported 00:32:09.815 Abort Command Limit: 4 00:32:09.815 Async Event Request Limit: 4 00:32:09.815 Number of Firmware Slots: N/A 00:32:09.815 Firmware Slot 1 Read-Only: N/A 00:32:09.815 Firmware Activation Without Reset: N/A 00:32:09.815 Multiple Update Detection Support: N/A 00:32:09.815 Firmware Update Granularity: No Information Provided 00:32:09.815 Per-Namespace SMART Log: Yes 00:32:09.815 Asymmetric Namespace Access Log Page: Supported 00:32:09.815 ANA Transition Time : 10 sec 00:32:09.815 00:32:09.815 Asymmetric Namespace Access Capabilities 00:32:09.815 ANA Optimized State : Supported 00:32:09.815 ANA Non-Optimized State : Supported 00:32:09.815 ANA Inaccessible State : Supported 00:32:09.815 ANA Persistent Loss State : Supported 00:32:09.815 ANA Change State : Supported 00:32:09.815 ANAGRPID is not changed : No 00:32:09.815 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:09.815 00:32:09.815 ANA Group Identifier Maximum : 128 00:32:09.815 Number of ANA Group Identifiers : 128 00:32:09.815 Max Number of Allowed Namespaces : 1024 00:32:09.815 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:09.815 Command Effects Log Page: Supported 
00:32:09.815 Get Log Page Extended Data: Supported 00:32:09.815 Telemetry Log Pages: Not Supported 00:32:09.815 Persistent Event Log Pages: Not Supported 00:32:09.815 Supported Log Pages Log Page: May Support 00:32:09.815 Commands Supported & Effects Log Page: Not Supported 00:32:09.815 Feature Identifiers & Effects Log Page:May Support 00:32:09.815 NVMe-MI Commands & Effects Log Page: May Support 00:32:09.815 Data Area 4 for Telemetry Log: Not Supported 00:32:09.815 Error Log Page Entries Supported: 128 00:32:09.815 Keep Alive: Supported 00:32:09.815 Keep Alive Granularity: 1000 ms 00:32:09.815 00:32:09.815 NVM Command Set Attributes 00:32:09.815 ========================== 00:32:09.815 Submission Queue Entry Size 00:32:09.815 Max: 64 00:32:09.815 Min: 64 00:32:09.815 Completion Queue Entry Size 00:32:09.815 Max: 16 00:32:09.815 Min: 16 00:32:09.815 Number of Namespaces: 1024 00:32:09.815 Compare Command: Not Supported 00:32:09.815 Write Uncorrectable Command: Not Supported 00:32:09.815 Dataset Management Command: Supported 00:32:09.815 Write Zeroes Command: Supported 00:32:09.815 Set Features Save Field: Not Supported 00:32:09.815 Reservations: Not Supported 00:32:09.815 Timestamp: Not Supported 00:32:09.815 Copy: Not Supported 00:32:09.815 Volatile Write Cache: Present 00:32:09.815 Atomic Write Unit (Normal): 1 00:32:09.815 Atomic Write Unit (PFail): 1 00:32:09.815 Atomic Compare & Write Unit: 1 00:32:09.815 Fused Compare & Write: Not Supported 00:32:09.815 Scatter-Gather List 00:32:09.815 SGL Command Set: Supported 00:32:09.815 SGL Keyed: Not Supported 00:32:09.815 SGL Bit Bucket Descriptor: Not Supported 00:32:09.815 SGL Metadata Pointer: Not Supported 00:32:09.815 Oversized SGL: Not Supported 00:32:09.815 SGL Metadata Address: Not Supported 00:32:09.815 SGL Offset: Supported 00:32:09.815 Transport SGL Data Block: Not Supported 00:32:09.815 Replay Protected Memory Block: Not Supported 00:32:09.815 00:32:09.815 Firmware Slot Information 00:32:09.815 
========================= 00:32:09.815 Active slot: 0 00:32:09.815 00:32:09.815 Asymmetric Namespace Access 00:32:09.815 =========================== 00:32:09.815 Change Count : 0 00:32:09.815 Number of ANA Group Descriptors : 1 00:32:09.815 ANA Group Descriptor : 0 00:32:09.815 ANA Group ID : 1 00:32:09.816 Number of NSID Values : 1 00:32:09.816 Change Count : 0 00:32:09.816 ANA State : 1 00:32:09.816 Namespace Identifier : 1 00:32:09.816 00:32:09.816 Commands Supported and Effects 00:32:09.816 ============================== 00:32:09.816 Admin Commands 00:32:09.816 -------------- 00:32:09.816 Get Log Page (02h): Supported 00:32:09.816 Identify (06h): Supported 00:32:09.816 Abort (08h): Supported 00:32:09.816 Set Features (09h): Supported 00:32:09.816 Get Features (0Ah): Supported 00:32:09.816 Asynchronous Event Request (0Ch): Supported 00:32:09.816 Keep Alive (18h): Supported 00:32:09.816 I/O Commands 00:32:09.816 ------------ 00:32:09.816 Flush (00h): Supported 00:32:09.816 Write (01h): Supported LBA-Change 00:32:09.816 Read (02h): Supported 00:32:09.816 Write Zeroes (08h): Supported LBA-Change 00:32:09.816 Dataset Management (09h): Supported 00:32:09.816 00:32:09.816 Error Log 00:32:09.816 ========= 00:32:09.816 Entry: 0 00:32:09.816 Error Count: 0x3 00:32:09.816 Submission Queue Id: 0x0 00:32:09.816 Command Id: 0x5 00:32:09.816 Phase Bit: 0 00:32:09.816 Status Code: 0x2 00:32:09.816 Status Code Type: 0x0 00:32:09.816 Do Not Retry: 1 00:32:09.816 Error Location: 0x28 00:32:09.816 LBA: 0x0 00:32:09.816 Namespace: 0x0 00:32:09.816 Vendor Log Page: 0x0 00:32:09.816 ----------- 00:32:09.816 Entry: 1 00:32:09.816 Error Count: 0x2 00:32:09.816 Submission Queue Id: 0x0 00:32:09.816 Command Id: 0x5 00:32:09.816 Phase Bit: 0 00:32:09.816 Status Code: 0x2 00:32:09.816 Status Code Type: 0x0 00:32:09.816 Do Not Retry: 1 00:32:09.816 Error Location: 0x28 00:32:09.816 LBA: 0x0 00:32:09.816 Namespace: 0x0 00:32:09.816 Vendor Log Page: 0x0 00:32:09.816 ----------- 00:32:09.816 
Entry: 2 00:32:09.816 Error Count: 0x1 00:32:09.816 Submission Queue Id: 0x0 00:32:09.816 Command Id: 0x4 00:32:09.816 Phase Bit: 0 00:32:09.816 Status Code: 0x2 00:32:09.816 Status Code Type: 0x0 00:32:09.816 Do Not Retry: 1 00:32:09.816 Error Location: 0x28 00:32:09.816 LBA: 0x0 00:32:09.816 Namespace: 0x0 00:32:09.816 Vendor Log Page: 0x0 00:32:09.816 00:32:09.816 Number of Queues 00:32:09.816 ================ 00:32:09.816 Number of I/O Submission Queues: 128 00:32:09.816 Number of I/O Completion Queues: 128 00:32:09.816 00:32:09.816 ZNS Specific Controller Data 00:32:09.816 ============================ 00:32:09.816 Zone Append Size Limit: 0 00:32:09.816 00:32:09.816 00:32:09.816 Active Namespaces 00:32:09.816 ================= 00:32:09.816 get_feature(0x05) failed 00:32:09.816 Namespace ID:1 00:32:09.816 Command Set Identifier: NVM (00h) 00:32:09.816 Deallocate: Supported 00:32:09.816 Deallocated/Unwritten Error: Not Supported 00:32:09.816 Deallocated Read Value: Unknown 00:32:09.816 Deallocate in Write Zeroes: Not Supported 00:32:09.816 Deallocated Guard Field: 0xFFFF 00:32:09.816 Flush: Supported 00:32:09.816 Reservation: Not Supported 00:32:09.816 Namespace Sharing Capabilities: Multiple Controllers 00:32:09.816 Size (in LBAs): 1953525168 (931GiB) 00:32:09.816 Capacity (in LBAs): 1953525168 (931GiB) 00:32:09.816 Utilization (in LBAs): 1953525168 (931GiB) 00:32:09.816 UUID: fd8303f4-ee6b-46fe-a569-476871c9f829 00:32:09.816 Thin Provisioning: Not Supported 00:32:09.816 Per-NS Atomic Units: Yes 00:32:09.816 Atomic Boundary Size (Normal): 0 00:32:09.816 Atomic Boundary Size (PFail): 0 00:32:09.816 Atomic Boundary Offset: 0 00:32:09.816 NGUID/EUI64 Never Reused: No 00:32:09.816 ANA group ID: 1 00:32:09.816 Namespace Write Protected: No 00:32:09.816 Number of LBA Formats: 1 00:32:09.816 Current LBA Format: LBA Format #00 00:32:09.816 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:09.816 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:09.816 rmmod nvme_tcp 00:32:09.816 rmmod nvme_fabrics 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:09.816 10:52:58 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.724 10:53:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:11.982 10:53:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:11.982 10:53:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:11.982 10:53:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:11.982 10:53:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:11.982 10:53:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:11.982 10:53:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:11.982 10:53:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:11.982 10:53:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:11.982 10:53:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:11.982 10:53:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:12.915 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:32:12.915 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:32:12.916 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:32:12.916 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:32:12.916 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:32:12.916 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:32:12.916 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:32:12.916 0000:00:04.0 (8086 3c20): ioatdma -> 
vfio-pci 00:32:12.916 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:32:12.916 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:32:12.916 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:32:12.916 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:32:12.916 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:32:12.916 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:32:12.916 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:32:12.916 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:32:13.853 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:32:13.853 00:32:13.853 real 0m8.478s 00:32:13.853 user 0m1.663s 00:32:13.853 sys 0m2.887s 00:32:13.853 10:53:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:13.853 10:53:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:13.853 ************************************ 00:32:13.853 END TEST nvmf_identify_kernel_target 00:32:13.853 ************************************ 00:32:13.853 10:53:02 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:13.853 10:53:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:13.853 10:53:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:13.853 10:53:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:13.853 ************************************ 00:32:13.854 START TEST nvmf_auth_host 00:32:13.854 ************************************ 00:32:13.854 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:14.113 * Looking for test storage... 
00:32:14.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.113 
10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:14.113 
10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:14.113 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:32:15.490 Found 0000:08:00.0 (0x8086 - 0x159b) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.490 10:53:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:32:15.490 Found 0000:08:00.1 (0x8086 - 0x159b) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:08:00.0: cvl_0_0' 00:32:15.490 Found net devices under 0000:08:00.0: cvl_0_0 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:15.490 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:32:15.491 Found net devices under 0000:08:00.1: cvl_0_1 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:15.491 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.748 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:15.748 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.748 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.748 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.748 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:15.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:15.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:32:15.748 00:32:15.748 --- 10.0.0.2 ping statistics --- 00:32:15.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.748 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:32:15.748 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:15.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:32:15.748 00:32:15.748 --- 10.0.0.1 ping statistics --- 00:32:15.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.748 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:32:15.748 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.749 10:53:04 
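The `nvmf_tcp_init` sequence above carves the two CVL ports into a point-to-point test topology: `cvl_0_0` moves into a private namespace and becomes the target at 10.0.0.2, while `cvl_0_1` stays in the default namespace as the initiator at 10.0.0.1, with an iptables rule admitting the NVMe/TCP port. A minimal sketch of that sequence, with interface names and addresses taken from the log; applying it needs root, so the script collects the commands instead of executing them (pipe the output to `sudo sh` to apply):

```shell
#!/bin/sh
# Dry-run sketch of the netns topology nvmf_tcp_init builds in the log above.
# cvl_0_0 -> target side (own namespace, 10.0.0.2); cvl_0_1 -> initiator
# side (default namespace, 10.0.0.1). Commands are collected, not run.
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
NS=cvl_0_0_ns_spdk
CMDS=""
plan() { CMDS="$CMDS$*
"; }

plan ip -4 addr flush "$TGT_IF"
plan ip -4 addr flush "$INI_IF"
plan ip netns add "$NS"
plan ip link set "$TGT_IF" netns "$NS"
plan ip addr add 10.0.0.1/24 dev "$INI_IF"
plan ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
plan ip link set "$INI_IF" up
plan ip netns exec "$NS" ip link set "$TGT_IF" up
plan ip netns exec "$NS" ip link set lo up
plan iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
plan ping -c 1 10.0.0.2                          # initiator -> target
plan ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
printf '%s' "$CMDS"
```

The two final pings reproduce the reachability check in the log; once both answer, the target app is launched under `ip netns exec` so it only sees the namespaced port.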
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3935103 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3935103 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3935103 ']' 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:15.749 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:16.006 10:53:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=09fc356e9d2006d2051e23fa9f0b2d16 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.0Vz 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 09fc356e9d2006d2051e23fa9f0b2d16 0 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 09fc356e9d2006d2051e23fa9f0b2d16 0 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=09fc356e9d2006d2051e23fa9f0b2d16 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:16.006 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.0Vz 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.0Vz 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.0Vz 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a338af7432760402793d3339c0c7a153e4b847562729774e2aa5e235f9bb26d0 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.EjZ 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a338af7432760402793d3339c0c7a153e4b847562729774e2aa5e235f9bb26d0 3 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a338af7432760402793d3339c0c7a153e4b847562729774e2aa5e235f9bb26d0 3 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a338af7432760402793d3339c0c7a153e4b847562729774e2aa5e235f9bb26d0 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.EjZ 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.EjZ 00:32:16.265 10:53:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.EjZ 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e58d0f01ceb44c85f491bca932e1dad45c28acd9aae918e7 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.68E 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e58d0f01ceb44c85f491bca932e1dad45c28acd9aae918e7 0 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e58d0f01ceb44c85f491bca932e1dad45c28acd9aae918e7 0 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e58d0f01ceb44c85f491bca932e1dad45c28acd9aae918e7 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.68E 00:32:16.265 10:53:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.68E 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.68E 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=734566497b0fba930f2f7ac36774d4156a90c5c43139ce55 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.SVo 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 734566497b0fba930f2f7ac36774d4156a90c5c43139ce55 2 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 734566497b0fba930f2f7ac36774d4156a90c5c43139ce55 2 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=734566497b0fba930f2f7ac36774d4156a90c5c43139ce55 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:16.265 10:53:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.SVo 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.SVo 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.SVo 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4808d1a17de3b772dc4384fb28aa30fa 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ntJ 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4808d1a17de3b772dc4384fb28aa30fa 1 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4808d1a17de3b772dc4384fb28aa30fa 1 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4808d1a17de3b772dc4384fb28aa30fa 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:16.265 10:53:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ntJ 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ntJ 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ntJ 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a88fd2d084214a9eb9c89d866aedd7fc 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.IgI 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a88fd2d084214a9eb9c89d866aedd7fc 1 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a88fd2d084214a9eb9c89d866aedd7fc 1 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a88fd2d084214a9eb9c89d866aedd7fc 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:16.524 
10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.IgI 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.IgI 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.IgI 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d33e0fe2764f26e976bfce4be3c23b9d30e4491155c1bb50 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Ojl 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d33e0fe2764f26e976bfce4be3c23b9d30e4491155c1bb50 2 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d33e0fe2764f26e976bfce4be3c23b9d30e4491155c1bb50 2 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=d33e0fe2764f26e976bfce4be3c23b9d30e4491155c1bb50 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Ojl 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Ojl 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Ojl 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ed5dd1b3fdad85fffa3df55a6fd0b26f 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.TSS 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ed5dd1b3fdad85fffa3df55a6fd0b26f 0 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ed5dd1b3fdad85fffa3df55a6fd0b26f 0 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:16.524 10:53:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ed5dd1b3fdad85fffa3df55a6fd0b26f 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.TSS 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.TSS 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.TSS 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9081da997e2dc6667a569c4d9955684fc6ad1515837c6a7d883acec992d73fce 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.JqP 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9081da997e2dc6667a569c4d9955684fc6ad1515837c6a7d883acec992d73fce 3 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9081da997e2dc6667a569c4d9955684fc6ad1515837c6a7d883acec992d73fce 3 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9081da997e2dc6667a569c4d9955684fc6ad1515837c6a7d883acec992d73fce 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.JqP 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.JqP 00:32:16.524 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.JqP 00:32:16.525 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:16.525 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3935103 00:32:16.525 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3935103 ']' 00:32:16.525 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.525 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:16.525 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
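The gen_dhchap_key/format_dhchap_key traces above (xxd from /dev/urandom, mktemp, an inline python step, chmod 0600) can be condensed into the following sketch. It assumes, from the traced values rather than from SPDK's sources, that the secret is the ASCII hex key with its CRC32 appended, base64-encoded, inside a `DHHC-1:<digest>:` wrapper; treat the framing details as an approximation.

```shell
# Sketch: build a DHHC-1 secret like the ones traced above.
key=$(xxd -p -c0 -l 32 /dev/urandom)    # 64 hex characters, as in the log
digest=1                                # 0=null 1=sha256 2=sha384 3=sha512
secret=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()
# CRC32 of the ASCII hex key, appended before base64 (byte order assumed)
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
)
file=$(mktemp -t spdk.key-sha256.XXX)
printf '%s' "$secret" > "$file"
chmod 0600 "$file"                      # keys must not be world-readable
echo "$file"
```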
00:32:16.525 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:16.525 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0Vz 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.EjZ ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EjZ 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.68E 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.SVo ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SVo 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ntJ 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.IgI ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IgI 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Ojl 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.094 
10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.TSS ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.TSS 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.JqP 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.094 10:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:17.095 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:17.095 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:17.095 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:17.095 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:17.095 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:17.095 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:32:17.095 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:17.095 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:17.095 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:17.095 10:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:18.029 Waiting for block devices as requested 00:32:18.029 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:32:18.029 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:32:18.029 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:32:18.287 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:32:18.287 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:32:18.287 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:32:18.287 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:32:18.546 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:32:18.546 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:32:18.546 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:32:18.546 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:32:18.805 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:32:18.805 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:32:18.805 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:32:19.063 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:32:19.063 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:32:19.063 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:19.630 No valid GPT data, bailing 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:19.630 10:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:32:19.630 00:32:19.630 Discovery Log Number of Records 2, Generation counter 2 00:32:19.630 =====Discovery Log Entry 0====== 00:32:19.630 trtype: tcp 00:32:19.630 adrfam: ipv4 00:32:19.630 subtype: current discovery subsystem 00:32:19.630 treq: not specified, sq flow control disable supported 00:32:19.630 portid: 1 00:32:19.630 trsvcid: 4420 00:32:19.630 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:19.630 traddr: 10.0.0.1 00:32:19.630 eflags: none 00:32:19.630 sectype: none 00:32:19.630 =====Discovery Log Entry 1====== 00:32:19.630 trtype: tcp 00:32:19.630 adrfam: ipv4 00:32:19.630 subtype: nvme subsystem 00:32:19.630 treq: not specified, sq flow control disable supported 00:32:19.630 portid: 1 00:32:19.630 trsvcid: 4420 00:32:19.630 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:19.630 traddr: 10.0.0.1 00:32:19.630 eflags: none 00:32:19.630 sectype: none 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.630 10:53:08 
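The mkdir/echo/ln writes traced above (nvmf/common.sh@658-@677 and host/auth.sh@36-@38) amount to exporting /dev/nvme0n1 as a kernel NVMe-oF/TCP target restricted to one host NQN. A condensed sketch, using the kernel nvmet configfs attribute names (needs root and the nvmet + nvmet-tcp modules loaded):

```shell
# Export /dev/nvme0n1 on 10.0.0.1:4420 under the subsystem NQN from the log.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"      # expose subsystem on the port
# Allow only the test host NQN instead of attr_allow_any_host:
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"
```

After this, `nvme discover -t tcp -a 10.0.0.1 -s 4420` reports the subsystem alongside the discovery subsystem, as in the two-record discovery log above.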
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- 
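The `echo 'hmac(sha256)'` / `echo ffdhe2048` / DHHC-1 echoes in the nvmet_auth_set_key trace above land in per-host DH-HMAC-CHAP attributes on the target side. A sketch, with attribute names taken from the kernel nvmet configfs ABI and placeholder secrets (the real values are the traced DHHC-1 strings):

```shell
# Target-side DH-HMAC-CHAP settings for one allowed host (root required).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'        > "$host/dhchap_hash"      # digest for the exchange
echo ffdhe2048             > "$host/dhchap_dhgroup"   # DH group
echo 'DHHC-1:00:<secret>:' > "$host/dhchap_key"       # host secret (unidirectional)
echo 'DHHC-1:02:<secret>:' > "$host/dhchap_ctrl_key"  # controller secret (bidirectional)
```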
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:19.630 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.631 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.890 nvme0n1 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- 
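The initiator-side cycle traced above (rpc_cmd wraps SPDK's rpc.py against a running app) is, written out directly with the parameters from the log: register the key files with the keyring, enable the digests/DH groups, attach with the key pair, verify the controller, then detach.

```shell
# Same sequence as the rpc_cmd traces, against a running SPDK application.
./scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-null.68E
./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SVo
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
./scripts/rpc.py bdev_nvme_get_controllers   # expect "nvme0" on success
./scripts/rpc.py bdev_nvme_detach_controller nvme0
```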
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.890 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.149 nvme0n1 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.149 10:53:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.149 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.409 nvme0n1 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.409 10:53:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.409 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.668 nvme0n1 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.668 10:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:20.668 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.668 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.668 nvme0n1 00:32:20.668 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.668 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.668 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.668 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.668 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.928 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:20.929 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.929 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.929 nvme0n1 00:32:20.929 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.929 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.929 10:53:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.929 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.929 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.929 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.189 nvme0n1 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.189 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.448 10:53:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.448 nvme0n1 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.448 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:21.707 10:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.707 10:53:10 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.707 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.966 nvme0n1 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.966 10:53:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.966 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.226 nvme0n1 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.226 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:32:22.227 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.227 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.227 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.227 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.227 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.227 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.227 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.227 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:22.227 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.227 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.485 nvme0n1 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.485 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.486 10:53:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.486 10:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.746 nvme0n1 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.746 
10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.746 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.315 nvme0n1 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.315 10:53:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.315 10:53:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.315 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.573 nvme0n1 00:32:23.573 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.573 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.573 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.573 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.573 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.573 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.573 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.573 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.573 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.573 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.574 10:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.832 nvme0n1 00:32:23.832 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.832 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.832 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.832 10:53:12 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.832 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.832 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.091 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.091 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.091 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.092 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.352 nvme0n1 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:24.352 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:24.353 
10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.353 10:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.921 nvme0n1 00:32:24.921 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.921 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.921 10:53:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.921 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.921 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.921 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.921 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.921 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.921 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.921 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.179 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.745 nvme0n1 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.745 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.312 nvme0n1 
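The cycle above (nvmet_auth_set_key on the target side, then `bdev_nvme_set_options` followed by `bdev_nvme_attach_controller` with `--dhchap-key`/`--dhchap-ctrlr-key`, a `bdev_nvme_get_controllers` check, and a detach) repeats once per digest/dhgroup/keyid combination. A minimal sketch of how those `rpc_cmd` invocations map onto SPDK JSON-RPC requests; the parameter names here mirror the CLI flags visible in the log and are an assumption that may differ across SPDK versions:

```python
def rpc(method, params):
    """Build a JSON-RPC 2.0 request like the ones rpc_cmd sends to the SPDK target."""
    return {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}

def connect_authenticate(digest, dhgroup, keyid, has_ctrlr_key=True):
    """Mirror the set-options/attach sequence the log runs for each key (sketch)."""
    reqs = [
        # host/auth.sh@60: restrict the host to one digest and one DH group
        rpc("bdev_nvme_set_options",
            {"dhchap_digests": [digest], "dhchap_dhgroups": [dhgroup]}),
    ]
    attach = {
        "name": "nvme0", "trtype": "tcp", "adrfam": "ipv4",
        "traddr": "10.0.0.1", "trsvcid": "4420",
        "hostnqn": "nqn.2024-02.io.spdk:host0",
        "subnqn": "nqn.2024-02.io.spdk:cnode0",
        "dhchap_key": f"key{keyid}",
    }
    if has_ctrlr_key:
        # keyid 4 in this run has no controller key (ckey is empty), so the
        # --dhchap-ctrlr-key argument is omitted there
        attach["dhchap_ctrlr_key"] = f"ckey{keyid}"
    reqs.append(rpc("bdev_nvme_attach_controller", attach))
    return reqs
```

The `${ckeys[keyid]:+...}` expansion seen at host/auth.sh@58 is what makes the controller-key argument conditional, which the `has_ctrlr_key` flag approximates here.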
00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.312 10:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:26.572 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.572 10:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.140 nvme0n1 00:32:27.140 10:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.140 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.141 10:53:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.141 10:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.730 nvme0n1 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 
00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:27.730 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.731 10:53:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.731 10:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.113 nvme0n1 00:32:29.113 10:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.113 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.113 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.114 10:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.493 nvme0n1 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:30.493 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.494 10:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.427 nvme0n1 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:31.427 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:31.428 10:53:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.428 10:53:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.428 10:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.805 nvme0n1 00:32:32.805 10:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.805 10:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.805 10:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.805 10:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.805 10:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.805 10:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.805 10:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.805 10:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.805 10:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.805 10:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.805 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.805 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.805 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:32.805 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.806 10:53:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.806 10:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.740 nvme0n1 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.740 
10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.740 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.000 nvme0n1 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.000 
10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.000 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.261 nvme0n1 00:32:34.261 10:53:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.261 10:53:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.261 10:53:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.261 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.519 nvme0n1 00:32:34.519 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.519 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.519 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.519 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.519 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.520 
10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.520 10:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.520 10:53:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.778 nvme0n1 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.778 10:53:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.778 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.035 nvme0n1 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:35.035 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:35.036 10:53:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.036 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.294 nvme0n1 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:35.294 10:53:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.294 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.554 nvme0n1 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.554 10:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.814 nvme0n1 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.814 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.815 10:53:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.815 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.073 nvme0n1 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.073 10:53:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:36.073 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.074 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.334 nvme0n1 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.334 
10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.334 
10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.334 10:53:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.592 nvme0n1 00:32:36.592 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.592 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.592 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.592 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.593 10:53:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.593 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.850 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.850 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.850 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.851 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.109 nvme0n1 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.109 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.368 nvme0n1 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:37.368 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:37.369 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:37.369 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:37.369 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:37.369 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.369 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:37.369 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:37.369 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:37.369 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.369 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:37.369 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.369 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.628 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.629 10:53:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.629 10:53:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.889 nvme0n1 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.889 10:53:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:37.889 10:53:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.889 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.149 nvme0n1 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.149 10:53:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.149 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.408 10:53:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.408 10:53:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.973 nvme0n1 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.973 
10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.973 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.974 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.543 nvme0n1 00:32:39.543 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.543 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.543 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.544 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.544 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.544 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.544 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.544 10:53:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.544 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.544 10:53:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.544 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.480 nvme0n1 00:32:40.480 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.480 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.480 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.480 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.480 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.480 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.480 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.480 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.481 10:53:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.481 10:53:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.049 nvme0n1 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:41.049 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.050 10:53:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.619 nvme0n1 00:32:41.619 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.619 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.619 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.619 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.619 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.619 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.619 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.619 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.619 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:32:41.619 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:41.879 10:53:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.879 10:53:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.812 nvme0n1 00:32:42.812 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.812 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.812 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.812 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.812 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.812 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.812 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.812 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.812 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.812 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:43.070 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.071 10:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.010 nvme0n1 00:32:44.010 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.010 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.010 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.010 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.010 10:53:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:44.010 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.010 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.010 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.011 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.011 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.011 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.011 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.011 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:44.011 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.011 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.011 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.011 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:44.011 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:44.011 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:44.011 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.270 10:53:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.207 nvme0n1 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:45.207 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.467 10:53:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.467 10:53:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.467 10:53:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.405 nvme0n1 00:32:46.405 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.405 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.405 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.405 10:53:34 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.405 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.405 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.406 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.406 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.406 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.406 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:46.666 
10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.666 10:53:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.666 10:53:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.606 nvme0n1 00:32:47.606 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.606 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.606 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.606 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.606 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.606 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.606 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.606 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.606 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.606 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:47.865 
10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.865 nvme0n1 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.865 10:53:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.865 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.124 nvme0n1 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.124 
10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.124 10:53:36 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:48.383 nvme0n1 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:48.383 
10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.383 10:53:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.383 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.643 nvme0n1 00:32:48.643 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.643 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.643 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.643 10:53:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.643 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.643 10:53:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.643 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.903 nvme0n1 00:32:48.903 10:53:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:48.903 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.904 10:53:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.904 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.163 nvme0n1 00:32:49.163 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.163 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.163 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.163 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.163 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.163 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.163 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.163 
10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.163 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.163 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.163 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.163 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.163 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.164 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.423 nvme0n1 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.423 10:53:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.423 10:53:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.683 nvme0n1 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.683 10:53:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.683 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.944 nvme0n1 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.944 10:53:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.944 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.204 nvme0n1 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:50.204 10:53:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.204 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.463 nvme0n1 00:32:50.463 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.723 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.723 10:53:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.723 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.723 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.723 10:53:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:50.723 10:53:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.723 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.724 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.984 nvme0n1 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:50.984 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.985 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.243 nvme0n1 00:32:51.243 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.243 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.243 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.243 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.243 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.503 10:53:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.763 nvme0n1 00:32:51.763 10:53:40 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=: 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.763 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.023 nvme0n1 00:32:52.023 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.023 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.023 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.023 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.023 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.282 
10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:52.282 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/: 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]] 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.283 10:53:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.283 10:53:40 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.854 nvme0n1 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==: 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]] 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.854 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.424 nvme0n1 00:32:53.424 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.424 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.424 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.424 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.424 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.424 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.424 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.424 10:53:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.424 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.424 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU: 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]] 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.684 10:53:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.288 nvme0n1 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:54.288 10:53:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==: 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]] 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:54.288 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:54.289 10:53:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:54.884 nvme0n1
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=:
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=:
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:54.884 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:54.885 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:55.452 nvme0n1
00:32:55.452 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:55.452 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:55.452 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:55.452 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:55.452 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:55.710 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:55.710 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:55.710 10:53:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:55.710 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:55.710 10:53:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/:
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=:
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDlmYzM1NmU5ZDIwMDZkMjA1MWUyM2ZhOWYwYjJkMTZFZ4V/:
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=: ]]
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTMzOGFmNzQzMjc2MDQwMjc5M2QzMzM5YzBjN2ExNTNlNGI4NDc1NjI3Mjk3NzRlMmFhNWUyMzVmOWJiMjZkMCO67PE=:
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:55.710 10:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:56.644 nvme0n1
00:32:56.644 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:56.644 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:56.644 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:56.644 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:56.644 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==:
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==:
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==:
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]]
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==:
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:56.904 10:53:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:57.838 nvme0n1
00:32:57.838 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:57.838 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:57.838 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:57.838 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:57.838 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:57.838 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU:
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c:
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDgwOGQxYTE3ZGUzYjc3MmRjNDM4NGZiMjhhYTMwZmFOMjIU:
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c: ]]
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg4ZmQyZDA4NDIxNGE5ZWI5Yzg5ZDg2NmFlZGQ3ZmPD4c2c:
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:58.096 10:53:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.033 nvme0n1
00:32:59.033 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.033 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:59.033 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:59.033 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.033 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.033 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==:
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX:
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMzZTBmZTI3NjRmMjZlOTc2YmZjZTRiZTNjMjNiOWQzMGU0NDkxMTU1YzFiYjUwvJmARQ==:
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX: ]]
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWQ1ZGQxYjNmZGFkODVmZmZhM2RmNTVhNmZkMGIyNmaTg6rX:
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.294 10:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.233 nvme0n1
00:33:00.233 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:00.233 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:00.233 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:00.233 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:00.233 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.233 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=:
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA4MWRhOTk3ZTJkYzY2NjdhNTY5YzRkOTk1NTY4NGZjNmFkMTUxNTgzN2M2YTdkODgzYWNlYzk5MmQ3M2ZjZclbDi4=:
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:00.493 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:00.494 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:00.494 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:00.494 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:00.494 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:00.494 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:00.494 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:00.494 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:00.494 10:53:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:00.494 10:53:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:00.494 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:00.494 10:53:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.429 nvme0n1
00:33:01.429 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:01.429 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:01.429 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:01.429 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:01.429 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.429 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==:
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==:
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU4ZDBmMDFjZWI0NGM4NWY0OTFiY2E5MzJlMWRhZDQ1YzI4YWNkOWFhZTkxOGU3IsjKNw==:
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==: ]]
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzM0NTY2NDk3YjBmYmE5MzBmMmY3YWMzNjc3NGQ0MTU2YTkwYzVjNDMxMzljZTU1NHvd+g==:
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:01.687 10:53:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.688 request:
00:33:01.688 {
00:33:01.688 "name": "nvme0",
00:33:01.688 "trtype": "tcp",
00:33:01.688 "traddr": "10.0.0.1",
00:33:01.688 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:33:01.688 "adrfam": "ipv4",
00:33:01.688 "trsvcid": "4420",
00:33:01.688 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:33:01.688 "method": "bdev_nvme_attach_controller",
00:33:01.688 "req_id": 1
00:33:01.688 }
00:33:01.688 Got JSON-RPC error response
00:33:01.688 response:
00:33:01.688 {
00:33:01.688 "code": -5,
00:33:01.688 "message": "Input/output error"
00:33:01.688 }
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.688 request:
00:33:01.688 {
00:33:01.688 "name": "nvme0",
00:33:01.688 "trtype": "tcp",
00:33:01.688 "traddr": "10.0.0.1",
00:33:01.688 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:33:01.688 "adrfam": "ipv4",
00:33:01.688 "trsvcid": "4420",
00:33:01.688 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:33:01.688 "dhchap_key": "key2",
00:33:01.688 "method": "bdev_nvme_attach_controller",
00:33:01.688 "req_id": 1
00:33:01.688 }
00:33:01.688 Got JSON-RPC error response
00:33:01.688 response:
00:33:01.688 {
00:33:01.688 "code": -5,
00:33:01.688 "message": "Input/output error"
00:33:01.688 }
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.688 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.947 request: 00:33:01.947 { 00:33:01.947 "name": "nvme0", 00:33:01.947 "trtype": "tcp", 00:33:01.947 "traddr": "10.0.0.1", 00:33:01.947 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:01.947 "adrfam": "ipv4", 00:33:01.947 "trsvcid": "4420", 00:33:01.947 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:01.947 "dhchap_key": "key1", 00:33:01.947 "dhchap_ctrlr_key": "ckey2", 00:33:01.947 "method": "bdev_nvme_attach_controller", 00:33:01.947 "req_id": 1 00:33:01.947 } 00:33:01.947 Got JSON-RPC error response 00:33:01.947 response: 00:33:01.947 { 00:33:01.947 
"code": -5, 00:33:01.947 "message": "Input/output error" 00:33:01.947 } 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:01.947 rmmod nvme_tcp 00:33:01.947 rmmod nvme_fabrics 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3935103 ']' 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3935103 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 3935103 ']' 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@950 -- # kill -0 3935103 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3935103 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3935103' 00:33:01.947 killing process with pid 3935103 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 3935103 00:33:01.947 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 3935103 00:33:02.207 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:02.207 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:02.207 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:02.207 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:02.207 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:02.207 10:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.207 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:02.207 10:53:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:04.118 10:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:05.496 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:33:05.496 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:33:05.496 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:33:05.496 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:33:05.496 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:33:05.496 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:33:05.496 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:33:05.496 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:33:05.496 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 
00:33:05.496 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:33:05.496 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:33:05.496 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:33:05.496 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:33:05.496 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:33:05.496 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:33:05.496 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:33:06.432 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:33:06.432 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.0Vz /tmp/spdk.key-null.68E /tmp/spdk.key-sha256.ntJ /tmp/spdk.key-sha384.Ojl /tmp/spdk.key-sha512.JqP /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:06.432 10:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:07.369 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:33:07.369 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:07.369 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:33:07.369 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:33:07.369 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:33:07.369 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:33:07.369 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:33:07.369 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:33:07.369 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:33:07.369 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:33:07.369 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:33:07.369 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:33:07.369 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:33:07.369 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:33:07.369 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:33:07.369 0000:80:04.1 
(8086 3c21): Already using the vfio-pci driver 00:33:07.369 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:33:07.369 00:33:07.369 real 0m53.364s 00:33:07.369 user 0m51.127s 00:33:07.369 sys 0m5.330s 00:33:07.369 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:07.369 10:53:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.369 ************************************ 00:33:07.369 END TEST nvmf_auth_host 00:33:07.369 ************************************ 00:33:07.369 10:53:55 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:07.369 10:53:55 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:07.369 10:53:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:07.369 10:53:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:07.369 10:53:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:07.369 ************************************ 00:33:07.369 START TEST nvmf_digest 00:33:07.369 ************************************ 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:07.369 * Looking for test storage... 
00:33:07.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.369 10:53:55 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:07.370 10:53:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:33:09.272 Found 0000:08:00.0 (0x8086 - 0x159b) 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.272 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:33:09.273 Found 0000:08:00.1 (0x8086 - 0x159b) 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:33:09.273 Found net devices under 0000:08:00.0: cvl_0_0 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:33:09.273 Found net devices under 0000:08:00.1: cvl_0_1 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:09.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:09.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:33:09.273 00:33:09.273 --- 10.0.0.2 ping statistics --- 00:33:09.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.273 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:09.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:09.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:33:09.273 00:33:09.273 --- 10.0.0.1 ping statistics --- 00:33:09.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.273 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:09.273 ************************************ 00:33:09.273 START TEST nvmf_digest_clean 00:33:09.273 ************************************ 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:09.273 10:53:57 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3943031 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3943031 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3943031 ']' 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:09.273 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:09.273 [2024-07-23 10:53:57.665368] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:09.273 [2024-07-23 10:53:57.665463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.273 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.273 [2024-07-23 10:53:57.730108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.531 [2024-07-23 10:53:57.816313] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.531 [2024-07-23 10:53:57.816378] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.531 [2024-07-23 10:53:57.816394] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.531 [2024-07-23 10:53:57.816407] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.531 [2024-07-23 10:53:57.816418] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:09.531 [2024-07-23 10:53:57.816447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.531 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:09.531 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:09.531 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:09.531 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:09.531 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:09.531 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:09.531 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:09.532 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:09.532 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:09.532 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.532 10:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:09.532 null0 00:33:09.532 [2024-07-23 10:53:58.027030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.789 [2024-07-23 10:53:58.051221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:09.789 
10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3943141 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3943141 /var/tmp/bperf.sock 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3943141 ']' 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:09.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:09.789 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:09.789 [2024-07-23 10:53:58.093655] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:09.789 [2024-07-23 10:53:58.093734] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3943141 ] 00:33:09.789 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.789 [2024-07-23 10:53:58.149146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.789 [2024-07-23 10:53:58.240377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.047 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:10.047 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:10.047 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:10.047 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:10.047 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:10.305 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:10.305 10:53:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:10.870 nvme0n1 00:33:10.870 10:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:10.870 10:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:33:10.870 Running I/O for 2 seconds... 00:33:12.767 00:33:12.767 Latency(us) 00:33:12.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.768 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:12.768 nvme0n1 : 2.01 17760.60 69.38 0.00 0.00 7196.84 4126.34 13981.01 00:33:12.768 =================================================================================================================== 00:33:12.768 Total : 17760.60 69.38 0.00 0.00 7196.84 4126.34 13981.01 00:33:12.768 0 00:33:12.768 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:12.768 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:12.768 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:12.768 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:12.768 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:12.768 | select(.opcode=="crc32c") 00:33:12.768 | "\(.module_name) \(.executed)"' 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3943141 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3943141 ']' 00:33:13.333 10:54:01 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3943141 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3943141 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3943141' 00:33:13.333 killing process with pid 3943141 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3943141 00:33:13.333 Received shutdown signal, test time was about 2.000000 seconds 00:33:13.333 00:33:13.333 Latency(us) 00:33:13.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:13.333 =================================================================================================================== 00:33:13.333 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3943141 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:13.333 10:54:01 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3943534 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3943534 /var/tmp/bperf.sock 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3943534 ']' 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:13.333 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:13.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:13.334 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:13.334 10:54:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:13.334 [2024-07-23 10:54:01.764647] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:13.334 [2024-07-23 10:54:01.764746] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3943534 ] 00:33:13.334 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:13.334 Zero copy mechanism will not be used. 00:33:13.334 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.334 [2024-07-23 10:54:01.824833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.591 [2024-07-23 10:54:01.912585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.591 10:54:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:13.591 10:54:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:13.591 10:54:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:13.591 10:54:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:13.591 10:54:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:14.156 10:54:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:14.156 10:54:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:14.414 nvme0n1 00:33:14.414 10:54:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:14.414 10:54:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:14.671 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:14.671 Zero copy mechanism will not be used. 00:33:14.671 Running I/O for 2 seconds... 00:33:16.574 00:33:16.574 Latency(us) 00:33:16.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.574 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:16.574 nvme0n1 : 2.00 6124.38 765.55 0.00 0.00 2607.97 676.60 4611.79 00:33:16.574 =================================================================================================================== 00:33:16.574 Total : 6124.38 765.55 0.00 0.00 2607.97 676.60 4611.79 00:33:16.574 0 00:33:16.574 10:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:16.574 10:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:16.574 10:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:16.574 10:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:16.574 | select(.opcode=="crc32c") 00:33:16.574 | "\(.module_name) \(.executed)"' 00:33:16.574 10:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:16.833 10:54:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3943534 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3943534 ']' 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3943534 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3943534 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3943534' 00:33:16.833 killing process with pid 3943534 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3943534 00:33:16.833 Received shutdown signal, test time was about 2.000000 seconds 00:33:16.833 00:33:16.833 Latency(us) 00:33:16.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.833 =================================================================================================================== 00:33:16.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:16.833 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3943534 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:17.092 10:54:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3943881 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3943881 /var/tmp/bperf.sock 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3943881 ']' 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:17.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:17.092 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:17.092 [2024-07-23 10:54:05.529828] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:17.092 [2024-07-23 10:54:05.529927] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3943881 ] 00:33:17.092 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.092 [2024-07-23 10:54:05.591847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.351 [2024-07-23 10:54:05.679534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.351 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:17.351 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:17.351 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:17.351 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:17.351 10:54:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:17.917 10:54:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:17.917 10:54:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:18.174 nvme0n1 00:33:18.174 10:54:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:18.174 10:54:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:33:18.174 Running I/O for 2 seconds... 00:33:20.701 00:33:20.701 Latency(us) 00:33:20.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.701 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:20.701 nvme0n1 : 2.00 18790.48 73.40 0.00 0.00 6803.88 3689.43 18447.17 00:33:20.701 =================================================================================================================== 00:33:20.701 Total : 18790.48 73.40 0.00 0.00 6803.88 3689.43 18447.17 00:33:20.701 0 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:20.701 | select(.opcode=="crc32c") 00:33:20.701 | "\(.module_name) \(.executed)"' 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3943881 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3943881 ']' 00:33:20.701 10:54:08 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3943881 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3943881 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3943881' 00:33:20.701 killing process with pid 3943881 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3943881 00:33:20.701 Received shutdown signal, test time was about 2.000000 seconds 00:33:20.701 00:33:20.701 Latency(us) 00:33:20.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.701 =================================================================================================================== 00:33:20.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:20.701 10:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3943881 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:20.701 10:54:09 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3944545 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3944545 /var/tmp/bperf.sock 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3944545 ']' 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:20.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:20.701 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:20.701 [2024-07-23 10:54:09.154493] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:20.701 [2024-07-23 10:54:09.154588] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3944545 ] 00:33:20.701 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:20.701 Zero copy mechanism will not be used. 00:33:20.701 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.959 [2024-07-23 10:54:09.214529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.959 [2024-07-23 10:54:09.302201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.959 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:20.959 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:20.959 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:20.959 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:20.959 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:21.526 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:21.526 10:54:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:21.784 nvme0n1 00:33:21.784 10:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:21.785 10:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:22.045 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:22.045 Zero copy mechanism will not be used. 00:33:22.045 Running I/O for 2 seconds... 00:33:23.991 00:33:23.991 Latency(us) 00:33:23.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.991 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:23.991 nvme0n1 : 2.00 5674.48 709.31 0.00 0.00 2811.57 2135.99 11602.30 00:33:23.991 =================================================================================================================== 00:33:23.991 Total : 5674.48 709.31 0.00 0.00 2811.57 2135.99 11602.30 00:33:23.991 0 00:33:23.991 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:23.991 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:23.991 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:23.991 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:23.991 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:23.991 | select(.opcode=="crc32c") 00:33:23.991 | "\(.module_name) \(.executed)"' 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:24.249 10:54:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3944545 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3944545 ']' 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3944545 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3944545 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3944545' 00:33:24.249 killing process with pid 3944545 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3944545 00:33:24.249 Received shutdown signal, test time was about 2.000000 seconds 00:33:24.249 00:33:24.249 Latency(us) 00:33:24.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.249 =================================================================================================================== 00:33:24.249 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.249 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3944545 00:33:24.507 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3943031 00:33:24.507 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3943031 ']' 00:33:24.507 10:54:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3943031 00:33:24.507 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:24.507 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:24.507 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3943031 00:33:24.507 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:24.507 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:24.507 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3943031' 00:33:24.507 killing process with pid 3943031 00:33:24.507 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3943031 00:33:24.507 10:54:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3943031 00:33:24.765 00:33:24.765 real 0m15.399s 00:33:24.765 user 0m31.360s 00:33:24.765 sys 0m3.991s 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:24.765 ************************************ 00:33:24.765 END TEST nvmf_digest_clean 00:33:24.765 ************************************ 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:24.765 
************************************ 00:33:24.765 START TEST nvmf_digest_error 00:33:24.765 ************************************ 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3945130 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3945130 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3945130 ']' 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:24.765 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:24.765 [2024-07-23 10:54:13.121201] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:24.765 [2024-07-23 10:54:13.121294] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.765 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.765 [2024-07-23 10:54:13.186572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.023 [2024-07-23 10:54:13.272757] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.023 [2024-07-23 10:54:13.272815] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:25.023 [2024-07-23 10:54:13.272832] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.023 [2024-07-23 10:54:13.272847] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.023 [2024-07-23 10:54:13.272859] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:25.023 [2024-07-23 10:54:13.272888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:25.023 [2024-07-23 10:54:13.389624] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:25.023 null0 00:33:25.023 [2024-07-23 10:54:13.492518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.023 
[2024-07-23 10:54:13.516719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3945239 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3945239 /var/tmp/bperf.sock 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3945239 ']' 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:25.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:25.023 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:25.024 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:25.281 [2024-07-23 10:54:13.569105] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:25.281 [2024-07-23 10:54:13.569197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945239 ] 00:33:25.281 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.281 [2024-07-23 10:54:13.630102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.281 [2024-07-23 10:54:13.717850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.539 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:25.539 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:25.539 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:25.539 10:54:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:25.797 10:54:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:25.797 10:54:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.797 
10:54:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:25.797 10:54:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.797 10:54:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:25.797 10:54:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.362 nvme0n1 00:33:26.362 10:54:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:26.362 10:54:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.362 10:54:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:26.362 10:54:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.362 10:54:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:26.362 10:54:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:26.362 Running I/O for 2 seconds... 
00:33:26.362 [2024-07-23 10:54:14.695387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.362 [2024-07-23 10:54:14.695446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.362 [2024-07-23 10:54:14.695469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.362 [2024-07-23 10:54:14.712876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.362 [2024-07-23 10:54:14.712911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.362 [2024-07-23 10:54:14.712931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.362 [2024-07-23 10:54:14.726080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.362 [2024-07-23 10:54:14.726115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.362 [2024-07-23 10:54:14.726134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.362 [2024-07-23 10:54:14.743628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.362 [2024-07-23 10:54:14.743661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.362 [2024-07-23 10:54:14.743681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.362 [2024-07-23 10:54:14.756658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.362 [2024-07-23 10:54:14.756691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.362 [2024-07-23 10:54:14.756709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.362 [2024-07-23 10:54:14.770729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.362 [2024-07-23 10:54:14.770762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.362 [2024-07-23 10:54:14.770780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.362 [2024-07-23 10:54:14.785394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.362 [2024-07-23 10:54:14.785427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.362 [2024-07-23 10:54:14.785445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.362 [2024-07-23 10:54:14.800586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.362 [2024-07-23 10:54:14.800618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.362 [2024-07-23 10:54:14.800637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.362 [2024-07-23 10:54:14.815708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.362 [2024-07-23 10:54:14.815741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.362 [2024-07-23 10:54:14.815759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.362 [2024-07-23 10:54:14.829093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.362 [2024-07-23 10:54:14.829130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.362 [2024-07-23 10:54:14.829148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.362 [2024-07-23 10:54:14.841832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.363 [2024-07-23 10:54:14.841864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.363 [2024-07-23 10:54:14.841883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.363 [2024-07-23 10:54:14.857659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.363 [2024-07-23 10:54:14.857691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:26.363 [2024-07-23 10:54:14.857710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.620 [2024-07-23 10:54:14.874956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.620 [2024-07-23 10:54:14.874988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.620 [2024-07-23 10:54:14.875007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.620 [2024-07-23 10:54:14.891113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.620 [2024-07-23 10:54:14.891145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.620 [2024-07-23 10:54:14.891163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.620 [2024-07-23 10:54:14.904417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.620 [2024-07-23 10:54:14.904448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.620 [2024-07-23 10:54:14.904466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.620 [2024-07-23 10:54:14.922380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:26.620 [2024-07-23 10:54:14.922413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.620 [2024-07-23 10:54:14.922431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.620 [2024-07-23 10:54:14.935349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.620 [2024-07-23 10:54:14.935380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.620 [2024-07-23 10:54:14.935399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.620 [2024-07-23 10:54:14.948890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.620 [2024-07-23 10:54:14.948922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.620 [2024-07-23 10:54:14.948948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.620 [2024-07-23 10:54:14.963801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.620 [2024-07-23 10:54:14.963833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.620 [2024-07-23 10:54:14.963851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.620 [2024-07-23 10:54:14.977982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.620 [2024-07-23 10:54:14.978013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.620 [2024-07-23 10:54:14.978031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.620 [2024-07-23 10:54:14.992994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.620 [2024-07-23 10:54:14.993026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.620 [2024-07-23 10:54:14.993044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.620 [2024-07-23 10:54:15.006188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.620 [2024-07-23 10:54:15.006221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.620 [2024-07-23 10:54:15.006239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.621 [2024-07-23 10:54:15.021701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.621 [2024-07-23 10:54:15.021734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.621 [2024-07-23 10:54:15.021752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.621 [2024-07-23 10:54:15.036547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.621 [2024-07-23 10:54:15.036578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.621 [2024-07-23 10:54:15.036597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.621 [2024-07-23 10:54:15.049625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.621 [2024-07-23 10:54:15.049657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.621 [2024-07-23 10:54:15.049675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.621 [2024-07-23 10:54:15.064231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.621 [2024-07-23 10:54:15.064263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.621 [2024-07-23 10:54:15.064281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.621 [2024-07-23 10:54:15.078984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.621 [2024-07-23 10:54:15.079023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.621 [2024-07-23 10:54:15.079043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.621 [2024-07-23 10:54:15.093151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.621 [2024-07-23 10:54:15.093190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.621 [2024-07-23 10:54:15.093208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.621 [2024-07-23 10:54:15.107298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.621 [2024-07-23 10:54:15.107329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.621 [2024-07-23 10:54:15.107346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.621 [2024-07-23 10:54:15.121461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.621 [2024-07-23 10:54:15.121499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.621 [2024-07-23 10:54:15.121518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.878 [2024-07-23 10:54:15.136035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.878 [2024-07-23 10:54:15.136068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.878 [2024-07-23 10:54:15.136086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.878 [2024-07-23 10:54:15.150999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.878 [2024-07-23 10:54:15.151031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.878 [2024-07-23 10:54:15.151049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.878 [2024-07-23 10:54:15.165569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.878 [2024-07-23 10:54:15.165600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.878 [2024-07-23 10:54:15.165618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.878 [2024-07-23 10:54:15.178946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.878 [2024-07-23 10:54:15.178977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.878 [2024-07-23 10:54:15.178996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.878 [2024-07-23 10:54:15.194303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.194337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.194356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.879 [2024-07-23 10:54:15.208051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.208083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.208102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.879 [2024-07-23 10:54:15.224300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.224334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.224352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.879 [2024-07-23 10:54:15.238912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.238944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.238961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.879 [2024-07-23 10:54:15.252070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.252102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.252120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.879 [2024-07-23 10:54:15.267034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.267066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.267084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.879 [2024-07-23 10:54:15.281525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.281556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.281573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.879 [2024-07-23 10:54:15.295559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.295591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.295609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.879 [2024-07-23 10:54:15.309667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.309706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.309724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.879 [2024-07-23 10:54:15.323767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.323798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.323831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.879 [2024-07-23 10:54:15.337885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.337917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.337935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.879 [2024-07-23 10:54:15.352376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.352407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.352425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:26.879 [2024-07-23 10:54:15.365843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:26.879 [2024-07-23 10:54:15.365875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.879 [2024-07-23 10:54:15.365893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.136 [2024-07-23 10:54:15.382981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.136 [2024-07-23 10:54:15.383012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.136 [2024-07-23 10:54:15.383030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.136 [2024-07-23 10:54:15.396588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.136 [2024-07-23 10:54:15.396619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.396638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.411239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.411271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.411289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.425900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.425943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.425962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.440905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.440938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.440957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.455258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.455289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.455307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.469564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.469596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.469615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.484311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.484343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.484362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.497200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.497232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.497250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.512682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.512715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.512733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.527789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.527821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.527840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.543812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.543845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.543864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.556898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.556938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.556956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.575317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.575350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.575376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.588595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.588627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.588645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.602908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.602947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.602965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.617068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.617099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.617117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.137 [2024-07-23 10:54:15.631940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.137 [2024-07-23 10:54:15.631971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.137 [2024-07-23 10:54:15.631989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.646400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.646441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.646459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.660567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.660598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.660616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.676823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.676855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.676873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.690489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.690528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.690546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.705954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.705993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.706012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.722832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.722863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.722882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.736637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.736668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.736686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.750275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.750306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.750324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.763673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.763704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.763722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.778269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.778300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.778318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.794042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.794073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.794091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.805917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.805947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.805973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.822587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.822619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.822637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.836729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.836761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.836779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.850793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.850824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.850842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.864871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.864902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.864920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.878945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.878977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.879001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.395 [2024-07-23 10:54:15.893039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.395 [2024-07-23 10:54:15.893073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.395 [2024-07-23 10:54:15.893092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:15.907751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:15.907782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.653 [2024-07-23 10:54:15.907800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:15.921872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:15.921903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.653 [2024-07-23 10:54:15.921921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:15.936345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:15.936376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.653 [2024-07-23 10:54:15.936396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:15.950416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:15.950448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.653 [2024-07-23 10:54:15.950474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:15.965662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:15.965693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.653 [2024-07-23 10:54:15.965711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:15.978529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:15.978560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.653 [2024-07-23 10:54:15.978578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:15.992669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:15.992700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.653 [2024-07-23 10:54:15.992718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:16.006783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:16.006815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.653 [2024-07-23 10:54:16.006834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:16.020744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:16.020775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.653 [2024-07-23 10:54:16.020794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:16.035005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:16.035036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.653 [2024-07-23 10:54:16.035054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:16.049111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:16.049142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.653 [2024-07-23 10:54:16.049160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:16.065358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:16.065389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.653 [2024-07-23 10:54:16.065408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.653 [2024-07-23 10:54:16.080464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.653 [2024-07-23 10:54:16.080515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.654 [2024-07-23 10:54:16.080534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.654 [2024-07-23 10:54:16.093340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590)
00:33:27.654 [2024-07-23 10:54:16.093373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10821 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:27.654 [2024-07-23 10:54:16.093392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.654 [2024-07-23 10:54:16.109442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.654 [2024-07-23 10:54:16.109474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.654 [2024-07-23 10:54:16.109499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.654 [2024-07-23 10:54:16.122847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.654 [2024-07-23 10:54:16.122878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.654 [2024-07-23 10:54:16.122896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.654 [2024-07-23 10:54:16.137244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.654 [2024-07-23 10:54:16.137276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.654 [2024-07-23 10:54:16.137294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.654 [2024-07-23 10:54:16.151927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.654 [2024-07-23 10:54:16.151960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:17925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.654 [2024-07-23 10:54:16.151979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.911 [2024-07-23 10:54:16.168038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.168071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.168091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.180428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.180459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.180476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.193960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.193992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.194017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.208208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.208239] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.208257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.224761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.224793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.224811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.238953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.238984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.239002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.254102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.254133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.254151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.267359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.267390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.267408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.284499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.284530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.284548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.299499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.299530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.299548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.312707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.312737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.312755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.328111] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.328149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.328168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.340938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.340969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.340987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.355565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.355596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.355614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.369581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.369612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.369630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.386756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.386787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.386805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.912 [2024-07-23 10:54:16.398807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:27.912 [2024-07-23 10:54:16.398838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.912 [2024-07-23 10:54:16.398856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.415765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.415796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.415814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.430248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.430279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.430297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.446083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.446114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.446132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.460880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.460913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.460931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.473822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.473853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.473872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.492425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.492455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 
10:54:16.492474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.505103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.505134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.505151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.521100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.521131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.521148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.539068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.539099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.539117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.552706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.552737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18820 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.552756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.569365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.569395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.569414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.582117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.582148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.582173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.598246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.598279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.598298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.611743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.611775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.611793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.626095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.626128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.626146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.640509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.640550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.640568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.655063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.655096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.655113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.180 [2024-07-23 10:54:16.669106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e87590) 00:33:28.180 [2024-07-23 10:54:16.669139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-23 10:54:16.669157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.442 00:33:28.442 Latency(us) 00:33:28.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.442 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:28.442 nvme0n1 : 2.01 17411.74 68.01 0.00 0.00 7339.33 4223.43 20097.71 00:33:28.442 =================================================================================================================== 00:33:28.442 Total : 17411.74 68.01 0.00 0.00 7339.33 4223.43 20097.71 00:33:28.442 0 00:33:28.442 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:28.442 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:28.442 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:28.442 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:28.442 | .driver_specific 00:33:28.442 | .nvme_error 00:33:28.442 | .status_code 00:33:28.442 | .command_transient_transport_error' 00:33:28.442 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:33:28.442 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3945239 00:33:28.442 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3945239 ']' 00:33:28.442 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # 
kill -0 3945239 00:33:28.442 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:28.442 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:28.442 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3945239 00:33:28.699 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:28.699 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:28.699 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3945239' 00:33:28.699 killing process with pid 3945239 00:33:28.699 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3945239 00:33:28.699 Received shutdown signal, test time was about 2.000000 seconds 00:33:28.699 00:33:28.699 Latency(us) 00:33:28.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.700 =================================================================================================================== 00:33:28.700 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:28.700 10:54:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3945239 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@58 -- # bperfpid=3945551 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3945551 /var/tmp/bperf.sock 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3945551 ']' 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:28.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:28.700 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.700 [2024-07-23 10:54:17.172331] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:28.700 [2024-07-23 10:54:17.172425] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945551 ] 00:33:28.700 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:28.700 Zero copy mechanism will not be used. 
00:33:28.700 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.957 [2024-07-23 10:54:17.232409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.957 [2024-07-23 10:54:17.320157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.957 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:28.957 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:28.957 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:28.957 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:29.522 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:29.522 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.522 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:29.522 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.522 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.522 10:54:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.781 nvme0n1 00:33:29.781 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:33:29.781 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.781 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:29.781 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.781 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:29.781 10:54:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:29.781 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:29.781 Zero copy mechanism will not be used. 00:33:29.781 Running I/O for 2 seconds... 00:33:29.781 [2024-07-23 10:54:18.216825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:29.781 [2024-07-23 10:54:18.216883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.781 [2024-07-23 10:54:18.216904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.781 [2024-07-23 10:54:18.222158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:29.781 [2024-07-23 10:54:18.222193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.781 [2024-07-23 10:54:18.222211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.781 [2024-07-23 10:54:18.227454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x241cf70) 00:33:29.781 [2024-07-23 10:54:18.227495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.781 [2024-07-23 10:54:18.227515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.781 [2024-07-23 10:54:18.232297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:29.781 [2024-07-23 10:54:18.232332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.781 [2024-07-23 10:54:18.232351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.781 [2024-07-23 10:54:18.237835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:29.781 [2024-07-23 10:54:18.237870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.781 [2024-07-23 10:54:18.237888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.781 [2024-07-23 10:54:18.242993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:29.781 [2024-07-23 10:54:18.243026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.781 [2024-07-23 10:54:18.243044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.781 [2024-07-23 10:54:18.248497] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:29.781 [2024-07-23 10:54:18.248531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.781 [2024-07-23 10:54:18.248549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.781 [2024-07-23 10:54:18.254168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:29.781 [2024-07-23 10:54:18.254201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.781 [2024-07-23 10:54:18.254220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.781 [2024-07-23 10:54:18.260352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:29.781 [2024-07-23 10:54:18.260387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.781 [2024-07-23 10:54:18.260405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.781 [2024-07-23 10:54:18.267985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:29.781 [2024-07-23 10:54:18.268019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.781 [2024-07-23 10:54:18.268038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:29.781 [2024-07-23 10:54:18.275297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:29.781 [2024-07-23 10:54:18.275333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.781 [2024-07-23 10:54:18.275352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.781 [2024-07-23 10:54:18.279849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:29.781 [2024-07-23 10:54:18.279883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.781 [2024-07-23 10:54:18.279908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.285197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.285231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.285250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.291988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.292025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.292044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.299955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.299988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.300007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.307645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.307680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.307699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.314968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.315004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.315023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.322500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.322534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.322553] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.329907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.329943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.329962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.338078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.338113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.338132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.346553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.346593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.346613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.353556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.353589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.353607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.357643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.357675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.357693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.361603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.361634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.361652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.366714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.366746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.366764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.371780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.371811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.371829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.376785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.376816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.376834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.381727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.381759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.381777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.386738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.386769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.386787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.391913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.391944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.391961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.397949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.397981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.397999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.403004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.403035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.403053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.408084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.408114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.408131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.413177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 
00:33:30.040 [2024-07-23 10:54:18.413209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.413227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.418181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.418212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.418230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.423554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.423586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.423605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.429338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.429370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.429388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.434325] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.434357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.434381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.439421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.040 [2024-07-23 10:54:18.439451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.040 [2024-07-23 10:54:18.439469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.040 [2024-07-23 10:54:18.444532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.444564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.444582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.449387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.449418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.449436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.454357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.454388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.454406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.459402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.459433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.459450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.464462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.464502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.464521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.469460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.469499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.469517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.474530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.474561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.474579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.479531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.479567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.479585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.484384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.484415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.484432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.489419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.489450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.489468] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.494861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.494894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.494912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.500443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.500476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.500503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.505376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.505409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.505427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.510511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.510542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.510560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.515657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.515688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.515705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.520730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.520762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.520780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.525869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.525902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.525920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.531446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.531486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.531506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.041 [2024-07-23 10:54:18.537147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.041 [2024-07-23 10:54:18.537181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.041 [2024-07-23 10:54:18.537200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.299 [2024-07-23 10:54:18.542342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.299 [2024-07-23 10:54:18.542376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.299 [2024-07-23 10:54:18.542395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.299 [2024-07-23 10:54:18.547960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.299 [2024-07-23 10:54:18.547994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.299 [2024-07-23 10:54:18.548012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.553637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.553670] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.553688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.559472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.559513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.559531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.565465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.565506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.565525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.571037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.571082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.571101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.576611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.576644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.576662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.582464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.582505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.582524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.587945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.587978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.587996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.592921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.592952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.592970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.598563] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.598595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.598613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.604478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.604519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.604537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.611047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.611082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.611100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.616930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.616966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.616985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.622690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.622725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.622743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.628408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.628442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.628460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.634245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.634278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.634296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.640303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.640339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.640358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.646172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.646206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.646225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.652131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.652166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.652185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.657635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.657668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.657686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.663188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.663222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.663240] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.669070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.669103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.669129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.672549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.672581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.300 [2024-07-23 10:54:18.672599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.300 [2024-07-23 10:54:18.678609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.300 [2024-07-23 10:54:18.678643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.678661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.684094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.684127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.684145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.689835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.689869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.689887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.695779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.695815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.695833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.701761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.701795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.701813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.708634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.708668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.708687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.714023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.714056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.714074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.719144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.719182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.719201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.724204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.724235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.724252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.729226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.729258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.729275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.734253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.734285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.734303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.739294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.739328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.739346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.744095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.744127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.744145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.749135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 
00:33:30.301 [2024-07-23 10:54:18.749169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.749187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.754534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.754567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.754585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.760166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.760206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.760225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.765173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.765205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.765223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.770270] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.770302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.770320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.775303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.775334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.775352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.780231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.780261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.780279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.785288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.785321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.785339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.790686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.790720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.790738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.301 [2024-07-23 10:54:18.796464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.301 [2024-07-23 10:54:18.796507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.301 [2024-07-23 10:54:18.796527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.802107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.802142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.802164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.807203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.807236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.807262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.812950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.812983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.813001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.818504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.818538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.818557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.824601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.824635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.824653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.830332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.830365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 
10:54:18.830384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.835967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.836002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.836021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.839948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.839981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.840001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.844999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.845033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.845052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.852772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.852807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.852825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.859170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.859211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.859231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.864614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.864647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.864665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.867983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.868015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.868032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.872915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.872947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.872964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.877631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.877663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.877681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.882507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.882538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.882558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.887277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.887308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.887326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.892372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.892405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.892422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.897502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.897533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.897566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.903086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.903120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.903139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.908731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.908765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.561 [2024-07-23 10:54:18.908783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.561 [2024-07-23 10:54:18.913801] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.561 [2024-07-23 10:54:18.913833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.913851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.918785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.918818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.918836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.923807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.923839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.923857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.928811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.928843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.928861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.933812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.933844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.933862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.938443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.938478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.938504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.943926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.943968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.943987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.949346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.949383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.949401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.955243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.955277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.955295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.961082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.961116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.961135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.966616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.966650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.966669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.972122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.972155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.972173] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.977585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.977620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.977638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.980755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.980786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.980804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.984995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.985028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.985046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.988963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.988997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.989014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.993737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.993769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.993786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:18.998542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:18.998573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:18.998592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:19.003544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:19.003576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:19.003594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:19.008572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:19.008605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:19.008623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:19.013720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:19.013751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:19.013768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:19.018687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:19.018718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:19.018736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:19.023574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:19.023606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:19.023624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:19.028462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:19.028502] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:19.028527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:19.033307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:19.033339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:19.033357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:19.038303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:19.038335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:19.038352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:19.043485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:19.043519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.562 [2024-07-23 10:54:19.043536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.562 [2024-07-23 10:54:19.048331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x241cf70) 00:33:30.562 [2024-07-23 10:54:19.048362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.563 [2024-07-23 10:54:19.048380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.563 [2024-07-23 10:54:19.051807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.563 [2024-07-23 10:54:19.051838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.563 [2024-07-23 10:54:19.051856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.563 [2024-07-23 10:54:19.057277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.563 [2024-07-23 10:54:19.057309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.563 [2024-07-23 10:54:19.057327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.063352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.063387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.063405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.069879] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.069914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.069932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.075636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.075678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.075697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.081813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.081847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.081866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.087889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.087922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.087940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.094068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.094113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.094131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.100279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.100313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.100332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.106187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.106222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.106241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.111957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.111992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.112011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.118168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.118202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.118220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.124157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.124189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.124207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.130058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.130091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.130109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.136190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.136231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.136250] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.142377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.142410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.142429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.148565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.148600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.148619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.155508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.155540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.155559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.163493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.163526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.163545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.822 [2024-07-23 10:54:19.170657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.822 [2024-07-23 10:54:19.170693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.822 [2024-07-23 10:54:19.170711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.177214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.177249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.177268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.183428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.183462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.183502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.189760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.189794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.189813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.195639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.195672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.195690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.200622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.200654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.200673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.205763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.205794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.205812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.211252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 
10:54:19.211285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.211303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.216997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.217032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.217051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.222166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.222198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.222217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.227256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.227287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.227305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.232368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.232399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.232417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.237422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.237453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.237470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.242541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.242572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.242590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.247678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.247711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.247729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.252631] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.252663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.252681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.257543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.257576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.257594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.262490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.262520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.262538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.267401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.267439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.267457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.272410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.272441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.272469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.277516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.277548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.277566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.282564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.282596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.282614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.287648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.287679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.287697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.292837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.292867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.292885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.297999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.823 [2024-07-23 10:54:19.298030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.823 [2024-07-23 10:54:19.298048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.823 [2024-07-23 10:54:19.303102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.824 [2024-07-23 10:54:19.303136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.824 [2024-07-23 10:54:19.303155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.824 [2024-07-23 10:54:19.306623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.824 [2024-07-23 10:54:19.306654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.824 [2024-07-23 10:54:19.306672] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.824 [2024-07-23 10:54:19.311597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.824 [2024-07-23 10:54:19.311630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.824 [2024-07-23 10:54:19.311648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.824 [2024-07-23 10:54:19.317376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.824 [2024-07-23 10:54:19.317423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.824 [2024-07-23 10:54:19.317443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.824 [2024-07-23 10:54:19.323198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:30.824 [2024-07-23 10:54:19.323232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.824 [2024-07-23 10:54:19.323251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.329606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.329643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.329662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.335918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.335951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.335970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.341996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.342031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.342050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.348406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.348441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.348459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.354069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.354103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.354122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.358684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.358716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.358734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.364999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.365033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.365051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.371245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.371278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.371296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.377362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.377394] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.377412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.383714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.383748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.383767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.390570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.390603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.390622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.396824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.396856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.396874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.402774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.402807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.402825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.408923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.408955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.408974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.415804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.415838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.415857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.421588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.421621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.421646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.427057] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.427091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.427109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.432780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.432812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.432830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.438428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.438459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.083 [2024-07-23 10:54:19.438477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.083 [2024-07-23 10:54:19.444977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.083 [2024-07-23 10:54:19.445010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.445028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.452038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.452071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.452090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.460400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.460433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.460451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.468361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.468397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.468416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.475929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.475963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.475981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.482061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.482109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.482128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.487398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.487430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.487448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.492597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.492629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.492646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.497420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.497451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 
10:54:19.497469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.502627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.502659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.502676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.507782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.507812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.507830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.512936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.512967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.512985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.517951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.517982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.517999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.523194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.523226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.523251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.528255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.528286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.528304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.533220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.533251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.533269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.538278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.538308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.538326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.543450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.543498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.543517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.548407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.548439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.548457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.553620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.553653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.553672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.558832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.558866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.558883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.564002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.564033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.564051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.570097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.570135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.570154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.576152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.576192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.576210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.084 [2024-07-23 10:54:19.582316] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.084 [2024-07-23 10:54:19.582351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.084 [2024-07-23 10:54:19.582370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.343 [2024-07-23 10:54:19.588472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.343 [2024-07-23 10:54:19.588515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.343 [2024-07-23 10:54:19.588534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.343 [2024-07-23 10:54:19.594939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.343 [2024-07-23 10:54:19.594972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.343 [2024-07-23 10:54:19.594990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.343 [2024-07-23 10:54:19.601618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.343 [2024-07-23 10:54:19.601652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.343 [2024-07-23 10:54:19.601671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:33:31.343 [2024-07-23 10:54:19.609737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.343 [2024-07-23 10:54:19.609771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.343 [2024-07-23 10:54:19.609790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.617221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.617255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.617274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.625353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.625386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.625405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.633182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.633216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.633234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.641050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.641083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.641101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.647756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.647789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.647808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.655532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.655566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.655584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.664085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.664119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 
10:54:19.664138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.672122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.672156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.672174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.680316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.680351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.680370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.687945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.687980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.687999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.693128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.693162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.693188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.699669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.699702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.699721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.707174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.707208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.707227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.713063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.713095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.713114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.718065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.718095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.718113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.723900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.723931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.723948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.728168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.728198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.728217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.734134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.734167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.734185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.739005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.739039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.739058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.744361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.744400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.744419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.749426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.749458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.749476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.755135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.755169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.755187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.760963] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.760995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.761014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.766901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.766934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.766953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.772712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.772745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.344 [2024-07-23 10:54:19.772764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.344 [2024-07-23 10:54:19.778688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.344 [2024-07-23 10:54:19.778721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.778739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:33:31.345 [2024-07-23 10:54:19.784499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.345 [2024-07-23 10:54:19.784532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.784550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.345 [2024-07-23 10:54:19.789961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.345 [2024-07-23 10:54:19.789992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.790010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.345 [2024-07-23 10:54:19.795643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.345 [2024-07-23 10:54:19.795677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.795696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.345 [2024-07-23 10:54:19.801657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.345 [2024-07-23 10:54:19.801689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.801708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.345 [2024-07-23 10:54:19.805311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.345 [2024-07-23 10:54:19.805342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.805360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.345 [2024-07-23 10:54:19.810213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.345 [2024-07-23 10:54:19.810246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.810264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.345 [2024-07-23 10:54:19.815847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.345 [2024-07-23 10:54:19.815880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.815899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.345 [2024-07-23 10:54:19.821549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.345 [2024-07-23 10:54:19.821581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.821600] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.345 [2024-07-23 10:54:19.826588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.345 [2024-07-23 10:54:19.826621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.826639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.345 [2024-07-23 10:54:19.832090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.345 [2024-07-23 10:54:19.832123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.832142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.345 [2024-07-23 10:54:19.837697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.345 [2024-07-23 10:54:19.837729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.837754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.345 [2024-07-23 10:54:19.842908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.345 [2024-07-23 10:54:19.842942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:31.345 [2024-07-23 10:54:19.842960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.847907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.847941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.847961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.852847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.852882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.852900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.857831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.857861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.857879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.862890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.862922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.862941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.868024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.868055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.868073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.872977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.873008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.873027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.877988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.878020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.878038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.882863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 
10:54:19.882895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.882913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.887730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.887762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.887779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.892682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.892714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.892732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.897545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.897590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.897609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.902437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.902470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.902498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.907370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.907402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.907420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.912195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.912226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.912244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.917056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.917087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.917104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.605 [2024-07-23 10:54:19.922005] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.605 [2024-07-23 10:54:19.922036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.605 [2024-07-23 10:54:19.922060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.926947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.926978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.926996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.931944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.931975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.931993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.936907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.936938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.936956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.942024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.942055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.942073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.947112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.947143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.947161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.952206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.952237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.952255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.957265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.957299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.957317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.962226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.962258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.962275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.967281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.967318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.967336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.972304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.972335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.972354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.977509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.977542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 
10:54:19.977560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.982545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.982576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.982594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.987497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.987527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.987545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.992512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.992542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.992560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:19.997536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:19.997568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:19.997586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:20.003019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:20.003068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:20.003102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:20.009126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:20.009181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:20.009212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:20.015104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:20.015158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:20.015188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:20.021165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:20.021218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:20.021250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:20.027245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:20.027295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:20.027324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:20.033862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:20.033920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:20.033948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:20.040514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:20.040571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:20.040598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:20.046833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:20.046891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:20.046924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:20.052891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:20.052930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:20.052950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:20.059042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:20.059081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:20.059100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:20.065112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:20.065147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:20.065179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.606 [2024-07-23 10:54:20.070646] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.606 [2024-07-23 10:54:20.070681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.606 [2024-07-23 10:54:20.070699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.607 [2024-07-23 10:54:20.075923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.607 [2024-07-23 10:54:20.075956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.607 [2024-07-23 10:54:20.075974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.607 [2024-07-23 10:54:20.081019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.607 [2024-07-23 10:54:20.081051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.607 [2024-07-23 10:54:20.081069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.607 [2024-07-23 10:54:20.086865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.607 [2024-07-23 10:54:20.086900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.607 [2024-07-23 10:54:20.086918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:33:31.607 [2024-07-23 10:54:20.092969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.607 [2024-07-23 10:54:20.093002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.607 [2024-07-23 10:54:20.093021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.607 [2024-07-23 10:54:20.097552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.607 [2024-07-23 10:54:20.097585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.607 [2024-07-23 10:54:20.097603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.607 [2024-07-23 10:54:20.104683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.607 [2024-07-23 10:54:20.104717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.607 [2024-07-23 10:54:20.104736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.865 [2024-07-23 10:54:20.111906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.865 [2024-07-23 10:54:20.111940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.865 [2024-07-23 10:54:20.111959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.865 [2024-07-23 10:54:20.118205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.865 [2024-07-23 10:54:20.118240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.865 [2024-07-23 10:54:20.118259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.865 [2024-07-23 10:54:20.124404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.865 [2024-07-23 10:54:20.124439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.865 [2024-07-23 10:54:20.124458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.865 [2024-07-23 10:54:20.132169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.865 [2024-07-23 10:54:20.132204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.865 [2024-07-23 10:54:20.132223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.139431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.139467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 
10:54:20.139495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.145632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.145665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.145683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.151775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.151809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.151827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.158051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.158083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.158102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.161796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.161828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.161846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.166425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.166457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.166489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.171868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.171899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.171917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.176848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.176879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.176898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.181948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.181980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.181998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.187038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.187070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.187088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.192633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.192666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.192685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.198428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.198461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.198488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.204016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 
00:33:31.866 [2024-07-23 10:54:20.204050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.204068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.866 [2024-07-23 10:54:20.210637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x241cf70) 00:33:31.866 [2024-07-23 10:54:20.210670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.866 [2024-07-23 10:54:20.210688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.866 00:33:31.866 Latency(us) 00:33:31.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.866 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:31.866 nvme0n1 : 2.00 5521.24 690.15 0.00 0.00 2893.37 625.02 8641.04 00:33:31.866 =================================================================================================================== 00:33:31.866 Total : 5521.24 690.15 0.00 0.00 2893.37 625.02 8641.04 00:33:31.866 0 00:33:31.866 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:31.866 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:31.866 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:31.866 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:31.866 | .driver_specific 00:33:31.866 | .nvme_error 00:33:31.866 
| .status_code 00:33:31.866 | .command_transient_transport_error' 00:33:32.124 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 356 > 0 )) 00:33:32.124 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3945551 00:33:32.124 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3945551 ']' 00:33:32.124 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3945551 00:33:32.124 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:32.124 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:32.124 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3945551 00:33:32.124 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:32.124 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:32.124 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3945551' 00:33:32.124 killing process with pid 3945551 00:33:32.124 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3945551 00:33:32.124 Received shutdown signal, test time was about 2.000000 seconds 00:33:32.124 00:33:32.124 Latency(us) 00:33:32.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.124 =================================================================================================================== 00:33:32.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:32.124 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3945551 00:33:32.382 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # 
run_bperf_err randwrite 4096 128 00:33:32.382 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:32.382 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:32.383 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:32.383 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:32.383 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3945867 00:33:32.383 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:32.383 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3945867 /var/tmp/bperf.sock 00:33:32.383 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3945867 ']' 00:33:32.383 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:32.383 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:32.383 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:32.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:32.383 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:32.383 10:54:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.383 [2024-07-23 10:54:20.759188] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
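An aside on the `get_transient_errcount` step traced above: it pipes `bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error` and checks the count is positive (`(( 356 > 0 ))` in this run). A minimal Python sketch of the same extraction; only the field path and the 356 value come from the log, the rest of the payload shape is illustrative:

```python
import json

def transient_errcount(iostat_json: str) -> int:
    """Extract the transient-transport-error counter from bdev_get_iostat
    output, mirroring the jq filter:
      .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
    """
    stats = json.loads(iostat_json)
    return stats["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"
    ]

# Illustrative payload; the count matches this run's `(( 356 > 0 ))` check.
sample = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {"command_transient_transport_error": 356}
            }
        },
    }]
})

print(transient_errcount(sample))  # → 356
```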
00:33:32.383 [2024-07-23 10:54:20.759270] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945867 ] 00:33:32.383 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.383 [2024-07-23 10:54:20.815784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.640 [2024-07-23 10:54:20.907187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.640 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:32.640 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:32.640 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:32.640 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:32.898 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:32.898 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.898 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.898 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.898 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:32.898 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.463 nvme0n1 00:33:33.463 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:33.463 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.463 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.463 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.463 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:33.463 10:54:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:33.463 Running I/O for 2 seconds... 00:33:33.463 [2024-07-23 10:54:21.836275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f1ca0 00:33:33.463 [2024-07-23 10:54:21.837780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.463 [2024-07-23 10:54:21.837821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:33.463 [2024-07-23 10:54:21.850332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190de038 00:33:33.463 [2024-07-23 10:54:21.851791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.464 [2024-07-23 10:54:21.851832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 
m:0 dnr:0 00:33:33.464 [2024-07-23 10:54:21.865776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190fb480 00:33:33.464 [2024-07-23 10:54:21.867244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.464 [2024-07-23 10:54:21.867277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:33.464 [2024-07-23 10:54:21.879790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190df118 00:33:33.464 [2024-07-23 10:54:21.881216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.464 [2024-07-23 10:54:21.881249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:33.464 [2024-07-23 10:54:21.893821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190fb8b8 00:33:33.464 [2024-07-23 10:54:21.895139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.464 [2024-07-23 10:54:21.895171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:33.464 [2024-07-23 10:54:21.907348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190dfdc0 00:33:33.464 [2024-07-23 10:54:21.908473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.464 [2024-07-23 10:54:21.908537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:33.464 [2024-07-23 10:54:21.920969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190fc560 00:33:33.464 [2024-07-23 10:54:21.922093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.464 [2024-07-23 10:54:21.922124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:33.464 [2024-07-23 10:54:21.934261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190eaef0 00:33:33.464 [2024-07-23 10:54:21.935532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.464 [2024-07-23 10:54:21.935563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:33.464 [2024-07-23 10:54:21.947893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f7100 00:33:33.464 [2024-07-23 10:54:21.949146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.464 [2024-07-23 10:54:21.949177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:33.464 [2024-07-23 10:54:21.961497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e5220 00:33:33.464 [2024-07-23 10:54:21.962812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.464 [2024-07-23 10:54:21.962843] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:21.976954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f4b08 00:33:33.722 [2024-07-23 10:54:21.978796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:21.978828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:21.990535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190fb480 00:33:33.722 [2024-07-23 10:54:21.992197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:21.992228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.004149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f1868 00:33:33.722 [2024-07-23 10:54:22.005793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.005837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.017754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e1f80 00:33:33.722 [2024-07-23 10:54:22.019395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.019440] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.031410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190fe2e8 00:33:33.722 [2024-07-23 10:54:22.033026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.033071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.044353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e84c0 00:33:33.722 [2024-07-23 10:54:22.045251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.045295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.057965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e5a90 00:33:33.722 [2024-07-23 10:54:22.058851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.058881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.071207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e27f0 00:33:33.722 [2024-07-23 10:54:22.072236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:33.722 [2024-07-23 10:54:22.072267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.086243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ff3c8 00:33:33.722 [2024-07-23 10:54:22.087852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.087898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.100238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ed920 00:33:33.722 [2024-07-23 10:54:22.101823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.101879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.114641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ecc78 00:33:33.722 [2024-07-23 10:54:22.116418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.116451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.128191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f6890 00:33:33.722 [2024-07-23 10:54:22.129817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:17159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.129862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.142159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f1430 00:33:33.722 [2024-07-23 10:54:22.144101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.144154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.156335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190fbcf0 00:33:33.722 [2024-07-23 10:54:22.157523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.157582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.169457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e7818 00:33:33.722 [2024-07-23 10:54:22.170524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.170554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:33.722 [2024-07-23 10:54:22.183046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e5a90 00:33:33.722 [2024-07-23 10:54:22.184096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.722 [2024-07-23 10:54:22.184152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:33.723 [2024-07-23 10:54:22.197250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190fb8b8 00:33:33.723 [2024-07-23 10:54:22.198417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.723 [2024-07-23 10:54:22.198473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:33.723 [2024-07-23 10:54:22.212198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e8088 00:33:33.723 [2024-07-23 10:54:22.214329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.723 [2024-07-23 10:54:22.214369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:33.981 [2024-07-23 10:54:22.224740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f7da8 00:33:33.981 [2024-07-23 10:54:22.226127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.981 [2024-07-23 10:54:22.226158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:33.981 [2024-07-23 10:54:22.238606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f0bc0 
00:33:33.981 [2024-07-23 10:54:22.239925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.981 [2024-07-23 10:54:22.239968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:33.981 [2024-07-23 10:54:22.252673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f7538 00:33:33.981 [2024-07-23 10:54:22.253976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.981 [2024-07-23 10:54:22.254007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:33.981 [2024-07-23 10:54:22.266533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190fda78 00:33:33.981 [2024-07-23 10:54:22.267806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.981 [2024-07-23 10:54:22.267836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:33.981 [2024-07-23 10:54:22.280344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f6890 00:33:33.981 [2024-07-23 10:54:22.281603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.981 [2024-07-23 10:54:22.281635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:33.981 [2024-07-23 10:54:22.294177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1edc7f0) with pdu=0x2000190f5be8 00:33:33.981 [2024-07-23 10:54:22.295432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.981 [2024-07-23 10:54:22.295463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:33.981 [2024-07-23 10:54:22.308259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e3498 00:33:33.981 [2024-07-23 10:54:22.309465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.981 [2024-07-23 10:54:22.309502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.981 [2024-07-23 10:54:22.321823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e49b0 00:33:33.981 [2024-07-23 10:54:22.323009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.981 [2024-07-23 10:54:22.323039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:33.981 [2024-07-23 10:54:22.335631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190feb58 00:33:33.981 [2024-07-23 10:54:22.336802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.981 [2024-07-23 10:54:22.336832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:33.981 [2024-07-23 10:54:22.351219] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e5658 00:33:33.981 [2024-07-23 10:54:22.353127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.981 [2024-07-23 10:54:22.353157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:33.981 [2024-07-23 10:54:22.365298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ea680 00:33:33.981 [2024-07-23 10:54:22.367198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.982 [2024-07-23 10:54:22.367229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:33.982 [2024-07-23 10:54:22.379705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190eb760 00:33:33.982 [2024-07-23 10:54:22.381830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.982 [2024-07-23 10:54:22.381873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:33.982 [2024-07-23 10:54:22.390635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e4578 00:33:33.982 [2024-07-23 10:54:22.391993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.982 [2024-07-23 10:54:22.392039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:33:33.982 [2024-07-23 10:54:22.405335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f2d80 00:33:33.982 [2024-07-23 10:54:22.406538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.982 [2024-07-23 10:54:22.406569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:33.982 [2024-07-23 10:54:22.419133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ed4e8 00:33:33.982 [2024-07-23 10:54:22.420307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.982 [2024-07-23 10:54:22.420337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:33.982 [2024-07-23 10:54:22.432916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f9f68 00:33:33.982 [2024-07-23 10:54:22.434074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.982 [2024-07-23 10:54:22.434104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:33.982 [2024-07-23 10:54:22.446746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f8e88 00:33:33.982 [2024-07-23 10:54:22.447886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.982 [2024-07-23 10:54:22.447915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:33.982 [2024-07-23 10:54:22.460560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e0ea0 00:33:33.982 [2024-07-23 10:54:22.461662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.982 [2024-07-23 10:54:22.461692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:33.982 [2024-07-23 10:54:22.474370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f7970 00:33:33.982 [2024-07-23 10:54:22.475449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.982 [2024-07-23 10:54:22.475486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.488536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ddc00 00:33:34.240 [2024-07-23 10:54:22.489612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.489642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.502424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e99d8 00:33:34.240 [2024-07-23 10:54:22.503468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.503507] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.516269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190de038 00:33:34.240 [2024-07-23 10:54:22.517289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.517319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.530113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f0bc0 00:33:34.240 [2024-07-23 10:54:22.531100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.531130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.543927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e5658 00:33:34.240 [2024-07-23 10:54:22.544918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.544949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.557789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f6020 00:33:34.240 [2024-07-23 10:54:22.558737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.558767] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.571614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e2c28 00:33:34.240 [2024-07-23 10:54:22.572527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.572565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.585399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e3498 00:33:34.240 [2024-07-23 10:54:22.586300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.586330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.599218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ef270 00:33:34.240 [2024-07-23 10:54:22.600099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.600129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.615662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f0350 00:33:34.240 [2024-07-23 10:54:22.617249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:34.240 [2024-07-23 10:54:22.617281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.629544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ef6a8 00:33:34.240 [2024-07-23 10:54:22.631097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.631128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.643339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190eb328 00:33:34.240 [2024-07-23 10:54:22.644875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.644905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.657169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e1f80 00:33:34.240 [2024-07-23 10:54:22.658688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.658718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.671003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190df988 00:33:34.240 [2024-07-23 10:54:22.672498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:17354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.672528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.686606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e2c28 00:33:34.240 [2024-07-23 10:54:22.688828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.688858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.697899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e1b48 00:33:34.240 [2024-07-23 10:54:22.699383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.699420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.711804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e73e0 00:33:34.240 [2024-07-23 10:54:22.713241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.713272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.725681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f4b08 00:33:34.240 [2024-07-23 10:54:22.727094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.240 [2024-07-23 10:54:22.727125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:34.240 [2024-07-23 10:54:22.741070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f0bc0 00:33:34.498 [2024-07-23 10:54:22.742557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.498 [2024-07-23 10:54:22.742588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:34.498 [2024-07-23 10:54:22.755124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ed920 00:33:34.498 [2024-07-23 10:54:22.756520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.498 [2024-07-23 10:54:22.756551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:34.498 [2024-07-23 10:54:22.770180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ee190 00:33:34.498 [2024-07-23 10:54:22.772291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.498 [2024-07-23 10:54:22.772321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:34.498 [2024-07-23 10:54:22.781506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190dece0 00:33:34.498 
[2024-07-23 10:54:22.782843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.498 [2024-07-23 10:54:22.782874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:34.498 [2024-07-23 10:54:22.795392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f7538 00:33:34.498 [2024-07-23 10:54:22.796705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.498 [2024-07-23 10:54:22.796735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:34.498 [2024-07-23 10:54:22.810748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f9b30 00:33:34.498 [2024-07-23 10:54:22.812042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.498 [2024-07-23 10:54:22.812072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:34.498 [2024-07-23 10:54:22.824567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f81e0 00:33:34.498 [2024-07-23 10:54:22.825835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.498 [2024-07-23 10:54:22.825866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:34.498 [2024-07-23 10:54:22.838395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1edc7f0) with pdu=0x2000190fb048 00:33:34.498 [2024-07-23 10:54:22.839646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.498 [2024-07-23 10:54:22.839675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:34.498 [2024-07-23 10:54:22.852243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190fe720 00:33:34.498 [2024-07-23 10:54:22.853489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.498 [2024-07-23 10:54:22.853520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.498 [2024-07-23 10:54:22.866085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f0ff8 00:33:34.498 [2024-07-23 10:54:22.867283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.498 [2024-07-23 10:54:22.867314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:34.498 [2024-07-23 10:54:22.879895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ecc78 00:33:34.498 [2024-07-23 10:54:22.881076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.498 [2024-07-23 10:54:22.881106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:34.498 [2024-07-23 10:54:22.893701] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190de8a8 00:33:34.499 [2024-07-23 10:54:22.894860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.499 [2024-07-23 10:54:22.894889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:34.499 [2024-07-23 10:54:22.907527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e3d08 00:33:34.499 [2024-07-23 10:54:22.908671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.499 [2024-07-23 10:54:22.908701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:34.499 [2024-07-23 10:54:22.921312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e5a90 00:33:34.499 [2024-07-23 10:54:22.922422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.499 [2024-07-23 10:54:22.922452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:34.499 [2024-07-23 10:54:22.935148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e73e0 00:33:34.499 [2024-07-23 10:54:22.936240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.499 [2024-07-23 10:54:22.936270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:33:34.499 [2024-07-23 10:54:22.948982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e7c50 00:33:34.499 [2024-07-23 10:54:22.950046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.499 [2024-07-23 10:54:22.950081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:34.499 [2024-07-23 10:54:22.962801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ec408 00:33:34.499 [2024-07-23 10:54:22.963836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.499 [2024-07-23 10:54:22.963866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:34.499 [2024-07-23 10:54:22.976622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190eee38 00:33:34.499 [2024-07-23 10:54:22.977642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.499 [2024-07-23 10:54:22.977672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:34.499 [2024-07-23 10:54:22.990425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e88f8 00:33:34.499 [2024-07-23 10:54:22.991417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.499 [2024-07-23 10:54:22.991447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:34.757 [2024-07-23 10:54:23.004604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e4140 00:33:34.757 [2024-07-23 10:54:23.005573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.757 [2024-07-23 10:54:23.005604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:34.757 [2024-07-23 10:54:23.018446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190df550 00:33:34.757 [2024-07-23 10:54:23.019385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.757 [2024-07-23 10:54:23.019416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:34.757 [2024-07-23 10:54:23.032393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ea248 00:33:34.757 [2024-07-23 10:54:23.033256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.757 [2024-07-23 10:54:23.033286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:34.757 [2024-07-23 10:54:23.046361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190fef90 00:33:34.757 [2024-07-23 10:54:23.047341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.757 [2024-07-23 10:54:23.047371] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:34.757 [2024-07-23 10:54:23.060223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e5a90 00:33:34.757 [2024-07-23 10:54:23.061178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.757 [2024-07-23 10:54:23.061216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:34.757 [2024-07-23 10:54:23.074116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ea248 00:33:34.757 [2024-07-23 10:54:23.075048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.757 [2024-07-23 10:54:23.075079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:34.757 [2024-07-23 10:54:23.087954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e7c50 00:33:34.757 [2024-07-23 10:54:23.088857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.757 [2024-07-23 10:54:23.088888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:34.757 [2024-07-23 10:54:23.101789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e1f80 00:33:34.757 [2024-07-23 10:54:23.102674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.757 [2024-07-23 10:54:23.102705] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:34.757 [2024-07-23 10:54:23.115650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f3a28 00:33:34.757 [2024-07-23 10:54:23.116517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.757 [2024-07-23 10:54:23.116547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:34.758 [2024-07-23 10:54:23.129422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f6020 00:33:34.758 [2024-07-23 10:54:23.130305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.758 [2024-07-23 10:54:23.130336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.758 [2024-07-23 10:54:23.145106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f3a28 00:33:34.758 [2024-07-23 10:54:23.146723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.758 [2024-07-23 10:54:23.146754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:34.758 [2024-07-23 10:54:23.159187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e27f0 00:33:34.758 [2024-07-23 10:54:23.160786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20239 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:34.758 [2024-07-23 10:54:23.160817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:34.758 [2024-07-23 10:54:23.173093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e4578 00:33:34.758 [2024-07-23 10:54:23.174658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.758 [2024-07-23 10:54:23.174689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:34.758 [2024-07-23 10:54:23.187088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e49b0 00:33:34.758 [2024-07-23 10:54:23.188667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.758 [2024-07-23 10:54:23.188699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:34.758 [2024-07-23 10:54:23.201038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190ea680 00:33:34.758 [2024-07-23 10:54:23.202555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.758 [2024-07-23 10:54:23.202586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:34.758 [2024-07-23 10:54:23.214949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e5658 00:33:34.758 [2024-07-23 10:54:23.216430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 
nsid:1 lba:13062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.758 [2024-07-23 10:54:23.216461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:34.758 [2024-07-23 10:54:23.228948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190fef90 00:33:34.758 [2024-07-23 10:54:23.230412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.758 [2024-07-23 10:54:23.230444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:34.758 [2024-07-23 10:54:23.244307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190feb58 00:33:34.758 [2024-07-23 10:54:23.245783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.758 [2024-07-23 10:54:23.245822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:34.758 [2024-07-23 10:54:23.258250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190df550 00:33:34.758 [2024-07-23 10:54:23.259763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.259794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.273954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190de470 00:33:35.015 [2024-07-23 10:54:23.276119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.276149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.287869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190fe720 00:33:35.015 [2024-07-23 10:54:23.290047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.290079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.302087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190f0bc0 00:33:35.015 [2024-07-23 10:54:23.304246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.304277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.312827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190dece0 00:33:35.015 [2024-07-23 10:54:23.314107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.314137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.328196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e38d0 
00:33:35.015 [2024-07-23 10:54:23.329491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.329556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.342161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190de470 00:33:35.015 [2024-07-23 10:54:23.343397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.343428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.356120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e38d0 00:33:35.015 [2024-07-23 10:54:23.357294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.357324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.370077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e5a90 00:33:35.015 [2024-07-23 10:54:23.371342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.371373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.383799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.015 [2024-07-23 10:54:23.384836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.384868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.398643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.015 [2024-07-23 10:54:23.398864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.398914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.413387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.015 [2024-07-23 10:54:23.413623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.413672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.428119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.015 [2024-07-23 10:54:23.428341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.428384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.442916] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.015 [2024-07-23 10:54:23.443135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.443184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.457688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.015 [2024-07-23 10:54:23.457909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.457959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.472439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.015 [2024-07-23 10:54:23.472664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.472712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.487138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.015 [2024-07-23 10:54:23.487358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.487390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:33:35.015 [2024-07-23 10:54:23.501905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.015 [2024-07-23 10:54:23.502128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.502176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.015 [2024-07-23 10:54:23.516851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.015 [2024-07-23 10:54:23.517082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.015 [2024-07-23 10:54:23.517132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.531917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.532140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.532188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.546712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.546936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.546987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.561473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.561725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.561774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.576252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.576474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.576531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.591027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.591267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.591297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.605796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.606019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.606069] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.620591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.620812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.620859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.635361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.635596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.635646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.650175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.650396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.650444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.664954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.665184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.665233] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.679715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.679951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.679981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.694460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.694689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.694739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.709202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.709421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.709468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.723962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.724181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11023 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:35.272 [2024-07-23 10:54:23.724211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.738707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.738926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.738973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.753463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.753694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.753744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.272 [2024-07-23 10:54:23.768215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.272 [2024-07-23 10:54:23.768432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.272 [2024-07-23 10:54:23.768492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.529 [2024-07-23 10:54:23.783436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.529 [2024-07-23 10:54:23.783675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:4167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.529 [2024-07-23 10:54:23.783707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.529 [2024-07-23 10:54:23.798172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.529 [2024-07-23 10:54:23.798389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.529 [2024-07-23 10:54:23.798437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.529 [2024-07-23 10:54:23.812933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edc7f0) with pdu=0x2000190e6738 00:33:35.529 [2024-07-23 10:54:23.813158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.529 [2024-07-23 10:54:23.813222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:35.529 00:33:35.529 Latency(us) 00:33:35.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.529 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:35.529 nvme0n1 : 2.01 18016.16 70.38 0.00 0.00 7087.14 2900.57 19320.98 00:33:35.529 =================================================================================================================== 00:33:35.529 Total : 18016.16 70.38 0.00 0.00 7087.14 2900.57 19320.98 00:33:35.529 0 00:33:35.529 10:54:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:35.529 10:54:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:33:35.529 10:54:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:35.529 10:54:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:35.529 | .driver_specific 00:33:35.529 | .nvme_error 00:33:35.529 | .status_code 00:33:35.529 | .command_transient_transport_error' 00:33:35.786 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 )) 00:33:35.786 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3945867 00:33:35.786 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3945867 ']' 00:33:35.786 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3945867 00:33:35.786 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:35.786 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:35.786 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3945867 00:33:35.786 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:35.786 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:35.786 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3945867' 00:33:35.786 killing process with pid 3945867 00:33:35.786 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3945867 00:33:35.786 Received shutdown signal, test time was about 2.000000 seconds 00:33:35.786 00:33:35.786 Latency(us) 00:33:35.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:33:35.786 =================================================================================================================== 00:33:35.786 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:35.786 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3945867 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3946237 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3946237 /var/tmp/bperf.sock 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3946237 ']' 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:36.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:36.044 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.044 [2024-07-23 10:54:24.371239] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:36.044 [2024-07-23 10:54:24.371332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946237 ] 00:33:36.044 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:36.044 Zero copy mechanism will not be used. 00:33:36.044 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.044 [2024-07-23 10:54:24.431377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.044 [2024-07-23 10:54:24.519018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.301 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:36.301 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:36.301 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:36.301 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:36.559 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:36.559 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.559 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:33:36.559 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.559 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:36.559 10:54:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:36.817 nvme0n1 00:33:36.817 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:36.817 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.817 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.817 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.817 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:36.817 10:54:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:37.075 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:37.075 Zero copy mechanism will not be used. 00:33:37.075 Running I/O for 2 seconds... 
00:33:37.075 [2024-07-23 10:54:25.436777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.075 [2024-07-23 10:54:25.437143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.075 [2024-07-23 10:54:25.437182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.075 [2024-07-23 10:54:25.442318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.075 [2024-07-23 10:54:25.442660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.075 [2024-07-23 10:54:25.442693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.075 [2024-07-23 10:54:25.448832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.075 [2024-07-23 10:54:25.449160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.075 [2024-07-23 10:54:25.449193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.075 [2024-07-23 10:54:25.454712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.075 [2024-07-23 10:54:25.455044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.075 [2024-07-23 10:54:25.455076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.075 [2024-07-23 10:54:25.460729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.075 [2024-07-23 10:54:25.461057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.075 [2024-07-23 10:54:25.461088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.075 [2024-07-23 10:54:25.467226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.075 [2024-07-23 10:54:25.467574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.075 [2024-07-23 10:54:25.467606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.075 [2024-07-23 10:54:25.473772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.075 [2024-07-23 10:54:25.474103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.075 [2024-07-23 10:54:25.474135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.075 [2024-07-23 10:54:25.479320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.075 [2024-07-23 10:54:25.479659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.075 [2024-07-23 10:54:25.479690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.075 [2024-07-23 10:54:25.484704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.075 [2024-07-23 10:54:25.485036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.075 [2024-07-23 10:54:25.485067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.075 [2024-07-23 10:54:25.490162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.075 [2024-07-23 10:54:25.490496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.075 [2024-07-23 10:54:25.490527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.075 [2024-07-23 10:54:25.495507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.075 [2024-07-23 10:54:25.495835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.075 [2024-07-23 10:54:25.495866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.075 [2024-07-23 10:54:25.501505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.075 [2024-07-23 10:54:25.501836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:37.076 [2024-07-23 10:54:25.501867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.076 [2024-07-23 10:54:25.509119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.076 [2024-07-23 10:54:25.509448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.076 [2024-07-23 10:54:25.509478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.076 [2024-07-23 10:54:25.515180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.076 [2024-07-23 10:54:25.515514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.076 [2024-07-23 10:54:25.515546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.076 [2024-07-23 10:54:25.520860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.076 [2024-07-23 10:54:25.521190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.076 [2024-07-23 10:54:25.521220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.076 [2024-07-23 10:54:25.526298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.076 [2024-07-23 10:54:25.526636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.076 [2024-07-23 10:54:25.526666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.076 [2024-07-23 10:54:25.532607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.076 [2024-07-23 10:54:25.532932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.076 [2024-07-23 10:54:25.532963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.076 [2024-07-23 10:54:25.539111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.076 [2024-07-23 10:54:25.539441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.076 [2024-07-23 10:54:25.539489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.076 [2024-07-23 10:54:25.545641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.076 [2024-07-23 10:54:25.545971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.076 [2024-07-23 10:54:25.546003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.076 [2024-07-23 10:54:25.551014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.076 [2024-07-23 10:54:25.551327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.076 [2024-07-23 10:54:25.551357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.076 [2024-07-23 10:54:25.557692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.076 [2024-07-23 10:54:25.558022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.076 [2024-07-23 10:54:25.558053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.076 [2024-07-23 10:54:25.564769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.076 [2024-07-23 10:54:25.565103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.076 [2024-07-23 10:54:25.565134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.076 [2024-07-23 10:54:25.571532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.076 [2024-07-23 10:54:25.571866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.076 [2024-07-23 10:54:25.571895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.334 [2024-07-23 10:54:25.578092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 
00:33:37.334 [2024-07-23 10:54:25.578405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.334 [2024-07-23 10:54:25.578437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.334 [2024-07-23 10:54:25.584313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.334 [2024-07-23 10:54:25.584645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.334 [2024-07-23 10:54:25.584677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.334 [2024-07-23 10:54:25.589663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.334 [2024-07-23 10:54:25.589995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.334 [2024-07-23 10:54:25.590025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.334 [2024-07-23 10:54:25.595169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.334 [2024-07-23 10:54:25.595516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.334 [2024-07-23 10:54:25.595547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.334 [2024-07-23 10:54:25.600974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.601303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.601334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.606380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.606714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.606745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.612390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.612733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.612763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.618902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.619254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.619286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 
10:54:25.625406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.625742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.625773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.630981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.631306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.631337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.636319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.636657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.636688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.641676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.642004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.642035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.647175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.647507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.647537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.653152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.653488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.653519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.659553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.659883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.659912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.664899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.665227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.665257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.670187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.670520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.670550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.675566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.675916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.675947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.682552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.682884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.682914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.688338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.688671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.688701] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.694209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.694545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.694584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.699855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.700186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.700217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.705156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.705490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.705520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.710507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.710841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.710871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.715807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.716145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.716176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.721083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.721418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.721449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.726407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.726743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.726774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.731944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.732277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.732307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.737930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.738246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.738276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.744882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.745209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.745240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.751520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.335 [2024-07-23 10:54:25.751858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.335 [2024-07-23 10:54:25.751889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.335 [2024-07-23 10:54:25.758064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.336 [2024-07-23 10:54:25.758407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.336 [2024-07-23 10:54:25.758437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.336 [2024-07-23 10:54:25.764573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.336 [2024-07-23 10:54:25.764909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.336 [2024-07-23 10:54:25.764940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.336 [2024-07-23 10:54:25.770446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.336 [2024-07-23 10:54:25.770791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.336 [2024-07-23 10:54:25.770821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.336 [2024-07-23 10:54:25.775840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.336 [2024-07-23 10:54:25.776176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.336 [2024-07-23 10:54:25.776206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.336 [2024-07-23 10:54:25.781712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 
00:33:37.336 [2024-07-23 10:54:25.782051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.336 [2024-07-23 10:54:25.782082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.336 [2024-07-23 10:54:25.787211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.336 [2024-07-23 10:54:25.787559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.336 [2024-07-23 10:54:25.787590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.336 [2024-07-23 10:54:25.793454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.336 [2024-07-23 10:54:25.793777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.336 [2024-07-23 10:54:25.793807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.336 [2024-07-23 10:54:25.800168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.336 [2024-07-23 10:54:25.800554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.336 [2024-07-23 10:54:25.800585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.336 [2024-07-23 10:54:25.807444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.336 [2024-07-23 10:54:25.807798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.336 [2024-07-23 10:54:25.807828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.336 [2024-07-23 10:54:25.814763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.336 [2024-07-23 10:54:25.815176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.336 [2024-07-23 10:54:25.815206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.336 [2024-07-23 10:54:25.821845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.336 [2024-07-23 10:54:25.822142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.336 [2024-07-23 10:54:25.822172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.336 [2024-07-23 10:54:25.827054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.336 [2024-07-23 10:54:25.827354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.336 [2024-07-23 10:54:25.827384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.336 [2024-07-23 10:54:25.832037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.336 [2024-07-23 10:54:25.832341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.336 [2024-07-23 10:54:25.832370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.837244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.837555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.837586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.842375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.842684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.842714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.847341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.847649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.847688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.852316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.852621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.852651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.857323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.857634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.857663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.862313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.862619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.862648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.867263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.867570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.867599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.872222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.872527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.872557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.877226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.877543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.877573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.882274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.882590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.882619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.887267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.887575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.887605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.892704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.893002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.893031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.898381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.898690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.898720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.904060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.904360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.904390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.909835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.910135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.910165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.915598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.915898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.915928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.921249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.921561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.921590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.927171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.927472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.927509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.932878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.933178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.933208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.938848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.939146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.939189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.944951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.945253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.945283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.595 [2024-07-23 10:54:25.950659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.595 [2024-07-23 10:54:25.950963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.595 [2024-07-23 10:54:25.950992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:25.956769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:25.957094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:25.957125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:25.962955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:25.963257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:25.963287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:25.969209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:25.969597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:25.969628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:25.975477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:25.975783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:25.975813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:25.981719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:25.982017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:25.982047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:25.987884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:25.988266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:25.988296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:25.994229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:25.994637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:25.994668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.001591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.001910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.001940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.008601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.009001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.009031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.015824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.016143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.016173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.022120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.022439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.022469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.028426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.028795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.028825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.034090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.034391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.034422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.040235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.040545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.040575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.047136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.047557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.047588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.054140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.054442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.054472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.060555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.060855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.060886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.067339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.067743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.067774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.074625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.074924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.074953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.080033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.080334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.080364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.085467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.085774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.085804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.090848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.091146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.091175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.596 [2024-07-23 10:54:26.095982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.596 [2024-07-23 10:54:26.096281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.596 [2024-07-23 10:54:26.096312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.854 [2024-07-23 10:54:26.102125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.854 [2024-07-23 10:54:26.102426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.854 [2024-07-23 10:54:26.102466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.854 [2024-07-23 10:54:26.108525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.854 [2024-07-23 10:54:26.108825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.108855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.115598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.115901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.115931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.120888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.121189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.121219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.125964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.126259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.126289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.130935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.131231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.131260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.135897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.136199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.136229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.142326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.142726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.142756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.148809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.149178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.149208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.154978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.155294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.155324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.161111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.161503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.161533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.167502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.167854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.167885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.173688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.173987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.174016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.179831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.180129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.180158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.185917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.186212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.186243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.192052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.192348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.192379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.198178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.198487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.198517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.204316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.204622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.204653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.210450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.210754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.210784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.216554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.216853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.216882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.222695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.222994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.223023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.228826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.229123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.229153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.234957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.235258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.235288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.240039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.240337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.240367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.245320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.245628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.245659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.250556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.250853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.250883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.255650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.255947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.255986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.855 [2024-07-23 10:54:26.261411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:37.855 [2024-07-23 10:54:26.261823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.855 [2024-07-23 10:54:26.261855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.855 [2024-07-23 10:54:26.268726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.855 [2024-07-23 10:54:26.269026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.269055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.274311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.274618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.274648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.279908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.280206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.280235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.285191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.285501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.285531] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.290555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.290861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.290891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.295622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.295917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.295947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.301533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.301830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.301859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.307645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.307941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 
10:54:26.307971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.313397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.313702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.313732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.319489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.319793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.319822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.325672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.326015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.326045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.332619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.332918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.332949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.339729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.340029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.340059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.346610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.346906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.346936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.856 [2024-07-23 10:54:26.352688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:37.856 [2024-07-23 10:54:26.352987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.856 [2024-07-23 10:54:26.353016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.114 [2024-07-23 10:54:26.358605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.114 [2024-07-23 10:54:26.358906] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.114 [2024-07-23 10:54:26.358945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.114 [2024-07-23 10:54:26.364490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.114 [2024-07-23 10:54:26.364790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.114 [2024-07-23 10:54:26.364820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.114 [2024-07-23 10:54:26.370457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.114 [2024-07-23 10:54:26.370765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.114 [2024-07-23 10:54:26.370795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.376765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.377063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.377093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.382628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 
10:54:26.382924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.382953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.389197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.389503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.389533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.394921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.395220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.395251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.400545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.400842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.400872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.406270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.406575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.406605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.411339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.411654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.411685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.417041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.417338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.417368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.422068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.422365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.422396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.427066] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.427363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.427393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.432114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.432411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.432440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.437061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.437358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.437386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.443035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.443331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.443361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.448923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.449224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.449254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.455181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.455489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.455519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.461087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.461383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.461413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.466704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.467002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.467032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.472454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.472767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.472799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.478559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.478858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.478889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.484318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.484625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.484655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.490192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.490499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.490529] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.496103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.496402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.496432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.501808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.502106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.502136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.507537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.507833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.507871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.513374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.513702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.513733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.519065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.519373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.115 [2024-07-23 10:54:26.519402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.115 [2024-07-23 10:54:26.524129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.115 [2024-07-23 10:54:26.524428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.524458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.529181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.529488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.529518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.534240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.534544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.534574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.539329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.539638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.539668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.544414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.544717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.544747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.549571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.549872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.549903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.554989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.555302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.555332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.560565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.560871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.560901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.565616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.565919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.565948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.570581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.570881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.570911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.575538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 
00:33:38.116 [2024-07-23 10:54:26.575835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.575864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.580520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.580829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.580859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.585475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.585791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.585821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.590470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.590777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.590806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.595548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.595848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.595878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.600632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.600939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.600970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.605616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.605918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.605948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 10:54:26.610694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.610995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.611025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.116 [2024-07-23 
10:54:26.615898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.116 [2024-07-23 10:54:26.616197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.116 [2024-07-23 10:54:26.616227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.374 [2024-07-23 10:54:26.621052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.374 [2024-07-23 10:54:26.621353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.374 [2024-07-23 10:54:26.621383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.374 [2024-07-23 10:54:26.626113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.374 [2024-07-23 10:54:26.626413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.374 [2024-07-23 10:54:26.626443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.374 [2024-07-23 10:54:26.631673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.374 [2024-07-23 10:54:26.631970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.374 [2024-07-23 10:54:26.631999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.374 [2024-07-23 10:54:26.637855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.374 [2024-07-23 10:54:26.638196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.638226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.643762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.644068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.644109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.648818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.649119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.649148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.653874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.654172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.654201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.658925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.659226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.659257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.664880] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.665183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.665213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.671740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.672039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.672069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.678022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.678339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.678369] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.684286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.684590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.684621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.690536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.690835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.690865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.696904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.697212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.697242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.704000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.704299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.704329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.710681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.711039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.711069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.717942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.718241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.718272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.724649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.724953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.724984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.731678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.732077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.732107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.738983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.739335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.739365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.746680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.746982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.747011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.752885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.753184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.753213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.757904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.758203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.758233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.762890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.763192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.763222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.767881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.768187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.768217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.773426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.375 [2024-07-23 10:54:26.773732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.773762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.375 [2024-07-23 10:54:26.778757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 
00:33:38.375 [2024-07-23 10:54:26.779052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.375 [2024-07-23 10:54:26.779083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.783844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.784143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.784173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.788791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.789089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.789120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.793818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.794117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.794147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.798797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.799096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.799135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.803787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.804085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.804115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.808751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.809049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.809079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.813720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.814021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.814051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 
10:54:26.818737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.819035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.819064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.823769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.824068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.824097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.829313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.829619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.829650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.835361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.835716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.835745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.841577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.841951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.841980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.847819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.848122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.848152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.854082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.854384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.854413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.860299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.860607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.860637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.866505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.866802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.866832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.376 [2024-07-23 10:54:26.872768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.376 [2024-07-23 10:54:26.873130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.376 [2024-07-23 10:54:26.873161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.634 [2024-07-23 10:54:26.878960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.634 [2024-07-23 10:54:26.879353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.634 [2024-07-23 10:54:26.879384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.634 [2024-07-23 10:54:26.885326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.634 [2024-07-23 10:54:26.885713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.634 [2024-07-23 10:54:26.885744] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.634 [2024-07-23 10:54:26.890783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.634 [2024-07-23 10:54:26.891081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.634 [2024-07-23 10:54:26.891111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.634 [2024-07-23 10:54:26.896142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.634 [2024-07-23 10:54:26.896440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.634 [2024-07-23 10:54:26.896489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.634 [2024-07-23 10:54:26.902334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.634 [2024-07-23 10:54:26.902643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.634 [2024-07-23 10:54:26.902674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.634 [2024-07-23 10:54:26.907293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.634 [2024-07-23 10:54:26.907596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:38.634 [2024-07-23 10:54:26.907625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.634 [2024-07-23 10:54:26.912267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.634 [2024-07-23 10:54:26.912574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.634 [2024-07-23 10:54:26.912604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.634 [2024-07-23 10:54:26.918060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.634 [2024-07-23 10:54:26.918359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.634 [2024-07-23 10:54:26.918389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.634 [2024-07-23 10:54:26.924433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.634 [2024-07-23 10:54:26.924743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.634 [2024-07-23 10:54:26.924774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.634 [2024-07-23 10:54:26.931377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.634 [2024-07-23 10:54:26.931682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.634 [2024-07-23 10:54:26.931711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.634 [2024-07-23 10:54:26.938792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:26.939207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:26.939237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:26.946281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:26.946594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:26.946624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:26.952725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:26.953106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:26.953136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:26.960089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:26.960463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:26.960501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:26.966267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:26.966577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:26.966606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:26.972541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:26.972841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:26.972870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:26.978758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:26.979062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:26.979092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:26.985000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 
00:33:38.635 [2024-07-23 10:54:26.985301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:26.985330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:26.992277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:26.992584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:26.992614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:26.998416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:26.998714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:26.998744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.004576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.004884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.004913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.010934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.011237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.011267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.018067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.018474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.018513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.025442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.025816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.025846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.032949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.033336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.033366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 
10:54:27.039951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.040249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.040279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.046185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.046492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.046522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.051828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.052127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.052157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.057904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.058211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.058241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.064072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.064372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.064412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.070397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.070711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.070741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.076609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.076922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.076953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.082900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.083256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.083286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.089240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.089545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.089576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.095419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.095729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.095759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.102108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.102507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.635 [2024-07-23 10:54:27.102538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.635 [2024-07-23 10:54:27.108487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.635 [2024-07-23 10:54:27.108785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.636 [2024-07-23 10:54:27.108815] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.636 [2024-07-23 10:54:27.113734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.636 [2024-07-23 10:54:27.114038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.636 [2024-07-23 10:54:27.114067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.636 [2024-07-23 10:54:27.118952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.636 [2024-07-23 10:54:27.119262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.636 [2024-07-23 10:54:27.119291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.636 [2024-07-23 10:54:27.124343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.636 [2024-07-23 10:54:27.124651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.636 [2024-07-23 10:54:27.124681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.636 [2024-07-23 10:54:27.130655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.636 [2024-07-23 10:54:27.130955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:38.636 [2024-07-23 10:54:27.130984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.636 [2024-07-23 10:54:27.135691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.636 [2024-07-23 10:54:27.135991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.636 [2024-07-23 10:54:27.136021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.943 [2024-07-23 10:54:27.140767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.943 [2024-07-23 10:54:27.141068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.943 [2024-07-23 10:54:27.141098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.943 [2024-07-23 10:54:27.145762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.943 [2024-07-23 10:54:27.146062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.943 [2024-07-23 10:54:27.146091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.943 [2024-07-23 10:54:27.150760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.943 [2024-07-23 10:54:27.151060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.943 [2024-07-23 10:54:27.151090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.943 [2024-07-23 10:54:27.155805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.943 [2024-07-23 10:54:27.156107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.943 [2024-07-23 10:54:27.156136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.943 [2024-07-23 10:54:27.160754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.943 [2024-07-23 10:54:27.161057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.943 [2024-07-23 10:54:27.161086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.943 [2024-07-23 10:54:27.165707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.943 [2024-07-23 10:54:27.166008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.943 [2024-07-23 10:54:27.166038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.943 [2024-07-23 10:54:27.170629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.943 [2024-07-23 10:54:27.170926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.943 [2024-07-23 10:54:27.170956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.943 [2024-07-23 10:54:27.175635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.943 [2024-07-23 10:54:27.175932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.943 [2024-07-23 10:54:27.175961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.943 [2024-07-23 10:54:27.180595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.943 [2024-07-23 10:54:27.180894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.180924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.185690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.185988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.186018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.191581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 
00:33:38.944 [2024-07-23 10:54:27.191879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.191910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.196861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.197158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.197188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.201930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.202227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.202257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.206937] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.207232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.207274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.211946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.212242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.212273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.216997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.217296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.217326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.221991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.222290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.222318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.227031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.227328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.227357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 
10:54:27.232034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.232332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.232362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.237085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.237381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.237410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.242084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.242383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.242412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.247692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.247992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.248022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.253843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.254169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.254200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.260008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.260306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.260336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.266148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.266454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.266491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.273066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.273463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.273502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.279677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.279975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.280007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.285004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.285300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.285330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.290211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.290519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.290550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.296214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.296554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.296584] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.302163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.302459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.302504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.307183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.307477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.307514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.312172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.312470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.312509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.317256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.317562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.317591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.323134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.944 [2024-07-23 10:54:27.323433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.944 [2024-07-23 10:54:27.323463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.944 [2024-07-23 10:54:27.328187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.328498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.328527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.333148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.333447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.333476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.338174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.338471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.338508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.343168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.343465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.343502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.348509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.348818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.348848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.354160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.354459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.354496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.359129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.359437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.359467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.364132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.364430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.364459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.369096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.369403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.369432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.374068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.374366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.374395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.379055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 
00:33:38.945 [2024-07-23 10:54:27.379353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.379381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.384561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.384861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.384892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.390718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.391016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.391046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.397890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.398250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.398280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.403989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.404289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.404319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.409248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.409556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.409585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.414605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.414902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.414931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 10:54:27.420922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90 00:33:38.945 [2024-07-23 10:54:27.421218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.945 [2024-07-23 10:54:27.421248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.945 [2024-07-23 
10:54:27.427352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1edcb30) with pdu=0x2000190fef90
00:33:38.945 [2024-07-23 10:54:27.427775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:38.945 [2024-07-23 10:54:27.427805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:38.945
00:33:38.945 Latency(us)
00:33:38.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:38.945 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:38.945 nvme0n1 : 2.00 5323.41 665.43 0.00 0.00 2997.49 2366.58 8252.68
00:33:38.945 ===================================================================================================================
00:33:38.945 Total : 5323.41 665.43 0.00 0.00 2997.49 2366.58 8252.68
00:33:38.945 0
00:33:39.202 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:39.202 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:39.202 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:39.202 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:39.202 | .driver_specific
00:33:39.202 | .nvme_error
00:33:39.202 | .status_code
00:33:39.202 | .command_transient_transport_error'
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 343 > 0 ))
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3946237
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3946237
']' 00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3946237
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3946237
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3946237'
00:33:39.460 killing process with pid 3946237
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3946237
00:33:39.460 Received shutdown signal, test time was about 2.000000 seconds
00:33:39.460
00:33:39.460 Latency(us)
00:33:39.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:39.460 ===================================================================================================================
00:33:39.460 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3946237
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3945130
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3945130 ']'
00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3945130
00:33:39.460 10:54:27
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3945130 00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3945130' 00:33:39.460 killing process with pid 3945130 00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3945130 00:33:39.460 10:54:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3945130 00:33:39.719 00:33:39.719 real 0m15.050s 00:33:39.719 user 0m30.496s 00:33:39.719 sys 0m4.071s 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:39.719 ************************************ 00:33:39.719 END TEST nvmf_digest_error 00:33:39.719 ************************************ 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-tcp 00:33:39.719 rmmod nvme_tcp 00:33:39.719 rmmod nvme_fabrics 00:33:39.719 rmmod nvme_keyring 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3945130 ']' 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3945130 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3945130 ']' 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3945130 00:33:39.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3945130) - No such process 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3945130 is not found' 00:33:39.719 Process with pid 3945130 is not found 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:39.719 10:54:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.253 10:54:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:42.253 00:33:42.253 real 0m34.506s 00:33:42.253 
user 1m2.564s 00:33:42.253 sys 0m9.396s 00:33:42.253 10:54:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:42.253 10:54:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:42.253 ************************************ 00:33:42.253 END TEST nvmf_digest 00:33:42.253 ************************************ 00:33:42.253 10:54:30 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:42.253 10:54:30 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:42.253 10:54:30 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:42.253 10:54:30 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:42.253 10:54:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:42.253 10:54:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:42.253 10:54:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:42.253 ************************************ 00:33:42.253 START TEST nvmf_bdevperf 00:33:42.253 ************************************ 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:42.253 * Looking for test storage... 
00:33:42.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:42.253 10:54:30 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.253 10:54:30 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:42.254 10:54:30 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:42.254 10:54:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:43.633 10:54:31 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:43.633 
10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:33:43.633 Found 0000:08:00.0 (0x8086 - 0x159b) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:33:43.633 Found 0000:08:00.1 (0x8086 - 0x159b) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:33:43.633 Found net devices under 0000:08:00.0: cvl_0_0 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:43.633 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:33:43.634 Found net devices under 0000:08:00.1: cvl_0_1 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:43.634 10:54:31 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:33:43.634 10:54:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:33:43.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:43.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms
00:33:43.634
00:33:43.634 --- 10.0.0.2 ping statistics ---
00:33:43.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:43.634 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms
00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:43.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:43.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:33:43.634 00:33:43.634 --- 10.0.0.1 ping statistics --- 00:33:43.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.634 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3948057 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3948057 00:33:43.634 10:54:32 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3948057 ']' 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:43.634 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:43.892 [2024-07-23 10:54:32.158781] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:43.892 [2024-07-23 10:54:32.158888] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.893 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.893 [2024-07-23 10:54:32.224127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:43.893 [2024-07-23 10:54:32.312094] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.893 [2024-07-23 10:54:32.312155] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:43.893 [2024-07-23 10:54:32.312180] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:43.893 [2024-07-23 10:54:32.312200] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:43.893 [2024-07-23 10:54:32.312220] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:43.893 [2024-07-23 10:54:32.312309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:43.893 [2024-07-23 10:54:32.312364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:43.893 [2024-07-23 10:54:32.312370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:44.151 [2024-07-23 10:54:32.436145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:44.151 Malloc0 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:44.151 [2024-07-23 10:54:32.498746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:33:44.151 { 00:33:44.151 "params": { 00:33:44.151 "name": "Nvme$subsystem", 00:33:44.151 "trtype": "$TEST_TRANSPORT", 00:33:44.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:44.151 "adrfam": "ipv4", 00:33:44.151 "trsvcid": "$NVMF_PORT", 00:33:44.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:44.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:44.151 "hdgst": ${hdgst:-false}, 00:33:44.151 "ddgst": ${ddgst:-false} 00:33:44.151 }, 00:33:44.151 "method": "bdev_nvme_attach_controller" 00:33:44.151 } 00:33:44.151 EOF 00:33:44.151 )") 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:44.151 10:54:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:44.151 "params": { 00:33:44.151 "name": "Nvme1", 00:33:44.151 "trtype": "tcp", 00:33:44.151 "traddr": "10.0.0.2", 00:33:44.151 "adrfam": "ipv4", 00:33:44.151 "trsvcid": "4420", 00:33:44.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:44.151 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:44.151 "hdgst": false, 00:33:44.151 "ddgst": false 00:33:44.151 }, 00:33:44.151 "method": "bdev_nvme_attach_controller" 00:33:44.151 }' 00:33:44.151 [2024-07-23 10:54:32.547461] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:44.151 [2024-07-23 10:54:32.547562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3948112 ] 00:33:44.151 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.151 [2024-07-23 10:54:32.609189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.409 [2024-07-23 10:54:32.700755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.667 Running I/O for 1 seconds... 00:33:45.619 00:33:45.619 Latency(us) 00:33:45.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.619 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:45.619 Verification LBA range: start 0x0 length 0x4000 00:33:45.619 Nvme1n1 : 1.00 7711.41 30.12 0.00 0.00 16508.97 3034.07 16796.63 00:33:45.619 =================================================================================================================== 00:33:45.619 Total : 7711.41 30.12 0.00 0.00 16508.97 3034.07 16796.63 00:33:45.902 10:54:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3948217 00:33:45.902 10:54:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:45.902 10:54:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:45.902 10:54:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:45.902 10:54:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:45.902 10:54:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:45.902 10:54:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:45.902 10:54:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:45.902 { 
00:33:45.902 "params": { 00:33:45.902 "name": "Nvme$subsystem", 00:33:45.902 "trtype": "$TEST_TRANSPORT", 00:33:45.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:45.902 "adrfam": "ipv4", 00:33:45.902 "trsvcid": "$NVMF_PORT", 00:33:45.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:45.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:45.902 "hdgst": ${hdgst:-false}, 00:33:45.902 "ddgst": ${ddgst:-false} 00:33:45.902 }, 00:33:45.902 "method": "bdev_nvme_attach_controller" 00:33:45.902 } 00:33:45.902 EOF 00:33:45.902 )") 00:33:45.902 10:54:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:45.902 10:54:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:45.902 10:54:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:45.902 10:54:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:45.902 "params": { 00:33:45.902 "name": "Nvme1", 00:33:45.902 "trtype": "tcp", 00:33:45.902 "traddr": "10.0.0.2", 00:33:45.902 "adrfam": "ipv4", 00:33:45.902 "trsvcid": "4420", 00:33:45.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:45.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:45.902 "hdgst": false, 00:33:45.902 "ddgst": false 00:33:45.902 }, 00:33:45.902 "method": "bdev_nvme_attach_controller" 00:33:45.902 }' 00:33:45.902 [2024-07-23 10:54:34.213656] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:45.902 [2024-07-23 10:54:34.213748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3948217 ] 00:33:45.902 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.902 [2024-07-23 10:54:34.275289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.902 [2024-07-23 10:54:34.365618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.164 Running I/O for 15 seconds... 00:33:48.694 10:54:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3948057 00:33:48.694 10:54:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:48.694 [2024-07-23 10:54:37.179019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:48.694 [2024-07-23 10:54:37.179199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:48.694 [2024-07-23 10:54:37.179779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.179977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.179993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.694 [2024-07-23 10:54:37.180009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.180027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.694 [2024-07-23 10:54:37.180042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.180059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.694 [2024-07-23 10:54:37.180074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.694 [2024-07-23 10:54:37.180091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.694 [2024-07-23 10:54:37.180107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 
[2024-07-23 10:54:37.180337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180523] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.695 [2024-07-23 10:54:37.180682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.180714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.180748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.180780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.180813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.180846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.180878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.180911] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.180942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.180974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.180991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.181023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.181054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.181091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:109 nsid:1 lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.181123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.181155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.181187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.181220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.181251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:48.695 [2024-07-23 10:54:37.181283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.181315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.181347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.181380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.695 [2024-07-23 10:54:37.181413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.695 [2024-07-23 10:54:37.181428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:48.696 [2024-07-23 10:54:37.181460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 
[2024-07-23 10:54:37.181848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.181982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.181997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 
[2024-07-23 10:54:37.182407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.696 [2024-07-23 10:54:37.182709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.696 [2024-07-23 10:54:37.182724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.182745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.182761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.182779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:25800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.182794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.182811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.182826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.182843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.182858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.182875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.182890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.182907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.182922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.182939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.182954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 
[2024-07-23 10:54:37.182971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.182986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.183002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.183018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.183034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.183049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.183066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.183086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.183103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.183118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.183135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.183154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.183172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.183187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.183204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.183219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.183237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.183252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.183269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.697 [2024-07-23 10:54:37.183284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.183300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1750aa0 is same with the state(5) to be set 00:33:48.697 [2024-07-23 10:54:37.183318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:48.697 [2024-07-23 10:54:37.183331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:48.697 [2024-07-23 10:54:37.183344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25928 len:8 PRP1 0x0 PRP2 0x0 00:33:48.697 [2024-07-23 10:54:37.183357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.697 [2024-07-23 10:54:37.183418] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1750aa0 was disconnected and freed. reset controller. 00:33:48.697 [2024-07-23 10:54:37.187688] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:48.697 [2024-07-23 10:54:37.187763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:48.697 [2024-07-23 10:54:37.188489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.697 [2024-07-23 10:54:37.188523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:48.697 [2024-07-23 10:54:37.188541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:48.697 [2024-07-23 10:54:37.188806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:48.697 [2024-07-23 10:54:37.189075] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:48.697 [2024-07-23 10:54:37.189097] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:48.697 [2024-07-23 10:54:37.189114] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:48.697 [2024-07-23 10:54:37.193238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:48.957 [2024-07-23 10:54:37.202453] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:48.957 [2024-07-23 10:54:37.202981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.957 [2024-07-23 10:54:37.203024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:48.957 [2024-07-23 10:54:37.203049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:48.957 [2024-07-23 10:54:37.203321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:48.957 [2024-07-23 10:54:37.203608] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:48.957 [2024-07-23 10:54:37.203634] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:48.957 [2024-07-23 10:54:37.203650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:48.957 [2024-07-23 10:54:37.207699] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:48.957 [2024-07-23 10:54:37.216812] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:48.957 [2024-07-23 10:54:37.217307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.957 [2024-07-23 10:54:37.217348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:48.957 [2024-07-23 10:54:37.217368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:48.957 [2024-07-23 10:54:37.217651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:48.957 [2024-07-23 10:54:37.217920] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:48.957 [2024-07-23 10:54:37.217943] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:48.957 [2024-07-23 10:54:37.217958] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:48.957 [2024-07-23 10:54:37.222018] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:48.957 [2024-07-23 10:54:37.231377] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:48.957 [2024-07-23 10:54:37.231930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.957 [2024-07-23 10:54:37.231986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:48.957 [2024-07-23 10:54:37.232005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:48.957 [2024-07-23 10:54:37.232275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:48.957 [2024-07-23 10:54:37.232557] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:48.957 [2024-07-23 10:54:37.232581] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:48.957 [2024-07-23 10:54:37.232596] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:48.957 [2024-07-23 10:54:37.236671] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the same nine-record reconnect cycle (resetting controller → connect() errno 111 → qpair 0x1756950 sock error → flush EBADF → ctrlr error state → reinitialization failed → failed state → reset failed) repeats 27 more times, from 10:54:37.245753 through 10:54:37.627169, roughly every 14 ms; only the timestamps differ ...]
00:33:49.219 [2024-07-23 10:54:37.636339] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.219 [2024-07-23 10:54:37.636868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.219 [2024-07-23 10:54:37.636910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.219 [2024-07-23 10:54:37.636928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.220 [2024-07-23 10:54:37.637199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.220 [2024-07-23 10:54:37.637467] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.220 [2024-07-23 10:54:37.637513] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.220 [2024-07-23 10:54:37.637530] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.220 [2024-07-23 10:54:37.641614] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.220 [2024-07-23 10:54:37.650942] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.220 [2024-07-23 10:54:37.651508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 10:54:37.651549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.220 [2024-07-23 10:54:37.651568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.220 [2024-07-23 10:54:37.651838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.220 [2024-07-23 10:54:37.652107] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.220 [2024-07-23 10:54:37.652130] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.220 [2024-07-23 10:54:37.652146] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.220 [2024-07-23 10:54:37.656259] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.220 [2024-07-23 10:54:37.665439] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.220 [2024-07-23 10:54:37.666036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 10:54:37.666078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.220 [2024-07-23 10:54:37.666097] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.220 [2024-07-23 10:54:37.666367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.220 [2024-07-23 10:54:37.666649] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.220 [2024-07-23 10:54:37.666672] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.220 [2024-07-23 10:54:37.666687] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.220 [2024-07-23 10:54:37.670777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.220 [2024-07-23 10:54:37.679994] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.220 [2024-07-23 10:54:37.680558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 10:54:37.680600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.220 [2024-07-23 10:54:37.680619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.220 [2024-07-23 10:54:37.680890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.220 [2024-07-23 10:54:37.681159] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.220 [2024-07-23 10:54:37.681182] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.220 [2024-07-23 10:54:37.681197] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.220 [2024-07-23 10:54:37.685285] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.220 [2024-07-23 10:54:37.694451] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.220 [2024-07-23 10:54:37.695017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 10:54:37.695060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.220 [2024-07-23 10:54:37.695078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.220 [2024-07-23 10:54:37.695348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.220 [2024-07-23 10:54:37.695630] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.220 [2024-07-23 10:54:37.695654] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.220 [2024-07-23 10:54:37.695669] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.220 [2024-07-23 10:54:37.699750] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.220 [2024-07-23 10:54:37.708920] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.220 [2024-07-23 10:54:37.709546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.220 [2024-07-23 10:54:37.709587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.220 [2024-07-23 10:54:37.709606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.220 [2024-07-23 10:54:37.709876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.220 [2024-07-23 10:54:37.710150] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.220 [2024-07-23 10:54:37.710173] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.220 [2024-07-23 10:54:37.710188] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.220 [2024-07-23 10:54:37.714300] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.479 [2024-07-23 10:54:37.723551] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.479 [2024-07-23 10:54:37.724035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.479 [2024-07-23 10:54:37.724096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.479 [2024-07-23 10:54:37.724116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.479 [2024-07-23 10:54:37.724386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.479 [2024-07-23 10:54:37.724672] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.479 [2024-07-23 10:54:37.724704] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.479 [2024-07-23 10:54:37.724722] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.479 [2024-07-23 10:54:37.728826] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.479 [2024-07-23 10:54:37.737970] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.479 [2024-07-23 10:54:37.738505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.479 [2024-07-23 10:54:37.738536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.479 [2024-07-23 10:54:37.738554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.479 [2024-07-23 10:54:37.738826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.479 [2024-07-23 10:54:37.739094] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.479 [2024-07-23 10:54:37.739116] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.479 [2024-07-23 10:54:37.739131] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.479 [2024-07-23 10:54:37.743226] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.479 [2024-07-23 10:54:37.752396] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.479 [2024-07-23 10:54:37.752868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.479 [2024-07-23 10:54:37.752898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.479 [2024-07-23 10:54:37.752916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.480 [2024-07-23 10:54:37.753179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.480 [2024-07-23 10:54:37.753447] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.480 [2024-07-23 10:54:37.753470] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.480 [2024-07-23 10:54:37.753496] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.480 [2024-07-23 10:54:37.757611] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.480 [2024-07-23 10:54:37.766977] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.480 [2024-07-23 10:54:37.767525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.480 [2024-07-23 10:54:37.767566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.480 [2024-07-23 10:54:37.767585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.480 [2024-07-23 10:54:37.767856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.480 [2024-07-23 10:54:37.768125] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.480 [2024-07-23 10:54:37.768147] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.480 [2024-07-23 10:54:37.768163] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.480 [2024-07-23 10:54:37.772252] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.480 [2024-07-23 10:54:37.781412] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.480 [2024-07-23 10:54:37.782022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.480 [2024-07-23 10:54:37.782063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.480 [2024-07-23 10:54:37.782082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.480 [2024-07-23 10:54:37.782353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.480 [2024-07-23 10:54:37.782636] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.480 [2024-07-23 10:54:37.782660] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.480 [2024-07-23 10:54:37.782682] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.480 [2024-07-23 10:54:37.786868] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.480 [2024-07-23 10:54:37.795868] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.480 [2024-07-23 10:54:37.796370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.480 [2024-07-23 10:54:37.796421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.480 [2024-07-23 10:54:37.796439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.480 [2024-07-23 10:54:37.796712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.480 [2024-07-23 10:54:37.796980] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.480 [2024-07-23 10:54:37.797002] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.480 [2024-07-23 10:54:37.797017] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.480 [2024-07-23 10:54:37.801179] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.480 [2024-07-23 10:54:37.810441] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.480 [2024-07-23 10:54:37.810980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.480 [2024-07-23 10:54:37.811036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.480 [2024-07-23 10:54:37.811055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.480 [2024-07-23 10:54:37.811325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.480 [2024-07-23 10:54:37.811613] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.480 [2024-07-23 10:54:37.811636] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.480 [2024-07-23 10:54:37.811652] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.480 [2024-07-23 10:54:37.815804] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.480 [2024-07-23 10:54:37.825062] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.480 [2024-07-23 10:54:37.825536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.480 [2024-07-23 10:54:37.825578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.480 [2024-07-23 10:54:37.825597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.480 [2024-07-23 10:54:37.825867] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.480 [2024-07-23 10:54:37.826135] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.480 [2024-07-23 10:54:37.826158] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.480 [2024-07-23 10:54:37.826174] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.480 [2024-07-23 10:54:37.830267] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.480 [2024-07-23 10:54:37.839716] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.480 [2024-07-23 10:54:37.840297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.480 [2024-07-23 10:54:37.840340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.480 [2024-07-23 10:54:37.840359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.480 [2024-07-23 10:54:37.840642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.480 [2024-07-23 10:54:37.840916] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.480 [2024-07-23 10:54:37.840939] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.480 [2024-07-23 10:54:37.840955] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.480 [2024-07-23 10:54:37.845037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.480 [2024-07-23 10:54:37.854231] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.480 [2024-07-23 10:54:37.854750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.480 [2024-07-23 10:54:37.854808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.480 [2024-07-23 10:54:37.854827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.480 [2024-07-23 10:54:37.855110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.480 [2024-07-23 10:54:37.855384] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.480 [2024-07-23 10:54:37.855407] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.480 [2024-07-23 10:54:37.855422] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.480 [2024-07-23 10:54:37.859522] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.480 [2024-07-23 10:54:37.868730] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.480 [2024-07-23 10:54:37.869268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.480 [2024-07-23 10:54:37.869343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.480 [2024-07-23 10:54:37.869362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.480 [2024-07-23 10:54:37.869646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.480 [2024-07-23 10:54:37.869928] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.480 [2024-07-23 10:54:37.869951] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.480 [2024-07-23 10:54:37.869966] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.480 [2024-07-23 10:54:37.874065] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.480 [2024-07-23 10:54:37.883309] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.480 [2024-07-23 10:54:37.883775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.480 [2024-07-23 10:54:37.883807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.480 [2024-07-23 10:54:37.883825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.480 [2024-07-23 10:54:37.884095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.480 [2024-07-23 10:54:37.884363] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.480 [2024-07-23 10:54:37.884386] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.480 [2024-07-23 10:54:37.884401] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.480 [2024-07-23 10:54:37.888491] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.480 [2024-07-23 10:54:37.897876] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.480 [2024-07-23 10:54:37.898397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.480 [2024-07-23 10:54:37.898447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.480 [2024-07-23 10:54:37.898463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.480 [2024-07-23 10:54:37.898736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.481 [2024-07-23 10:54:37.899004] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.481 [2024-07-23 10:54:37.899027] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.481 [2024-07-23 10:54:37.899042] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.481 [2024-07-23 10:54:37.903148] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.481 [2024-07-23 10:54:37.912459] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.481 [2024-07-23 10:54:37.912945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.481 [2024-07-23 10:54:37.912995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.481 [2024-07-23 10:54:37.913013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.481 [2024-07-23 10:54:37.913276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.481 [2024-07-23 10:54:37.913561] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.481 [2024-07-23 10:54:37.913584] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.481 [2024-07-23 10:54:37.913599] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.481 [2024-07-23 10:54:37.917688] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.481 [2024-07-23 10:54:37.927063] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.481 [2024-07-23 10:54:37.927590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.481 [2024-07-23 10:54:37.927631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.481 [2024-07-23 10:54:37.927650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.481 [2024-07-23 10:54:37.927920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.481 [2024-07-23 10:54:37.928201] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.481 [2024-07-23 10:54:37.928223] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.481 [2024-07-23 10:54:37.928245] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.481 [2024-07-23 10:54:37.932362] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.481 [2024-07-23 10:54:37.941557] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.481 [2024-07-23 10:54:37.942021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.481 [2024-07-23 10:54:37.942062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.481 [2024-07-23 10:54:37.942082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.481 [2024-07-23 10:54:37.942352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.481 [2024-07-23 10:54:37.942638] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.481 [2024-07-23 10:54:37.942662] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.481 [2024-07-23 10:54:37.942677] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.481 [2024-07-23 10:54:37.946746] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.481 [2024-07-23 10:54:37.956153] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.481 [2024-07-23 10:54:37.956636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.481 [2024-07-23 10:54:37.956692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.481 [2024-07-23 10:54:37.956711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.481 [2024-07-23 10:54:37.956981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.481 [2024-07-23 10:54:37.957250] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.481 [2024-07-23 10:54:37.957273] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.481 [2024-07-23 10:54:37.957288] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.481 [2024-07-23 10:54:37.961381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.481 [2024-07-23 10:54:37.970556] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.481 [2024-07-23 10:54:37.971118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.481 [2024-07-23 10:54:37.971171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.481 [2024-07-23 10:54:37.971190] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.481 [2024-07-23 10:54:37.971460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.481 [2024-07-23 10:54:37.971741] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.481 [2024-07-23 10:54:37.971764] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.481 [2024-07-23 10:54:37.971780] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.481 [2024-07-23 10:54:37.975882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.740 [2024-07-23 10:54:37.985151] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.740 [2024-07-23 10:54:37.985688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.740 [2024-07-23 10:54:37.985750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.740 [2024-07-23 10:54:37.985772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.740 [2024-07-23 10:54:37.986062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.740 [2024-07-23 10:54:37.986338] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.740 [2024-07-23 10:54:37.986360] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.740 [2024-07-23 10:54:37.986376] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.740 [2024-07-23 10:54:37.990455] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.740 [2024-07-23 10:54:37.999648] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.740 [2024-07-23 10:54:38.000167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.740 [2024-07-23 10:54:38.000218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.740 [2024-07-23 10:54:38.000235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.740 [2024-07-23 10:54:38.000510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.740 [2024-07-23 10:54:38.000779] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.740 [2024-07-23 10:54:38.000801] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.740 [2024-07-23 10:54:38.000816] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.740 [2024-07-23 10:54:38.004914] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.740 [2024-07-23 10:54:38.014137] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.740 [2024-07-23 10:54:38.014676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.740 [2024-07-23 10:54:38.014718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.740 [2024-07-23 10:54:38.014737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.740 [2024-07-23 10:54:38.015007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.740 [2024-07-23 10:54:38.015276] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.740 [2024-07-23 10:54:38.015299] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.740 [2024-07-23 10:54:38.015315] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.740 [2024-07-23 10:54:38.019399] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.740 [2024-07-23 10:54:38.028562] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.740 [2024-07-23 10:54:38.029176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.740 [2024-07-23 10:54:38.029207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.740 [2024-07-23 10:54:38.029224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.740 [2024-07-23 10:54:38.029505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.740 [2024-07-23 10:54:38.029782] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.740 [2024-07-23 10:54:38.029805] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.740 [2024-07-23 10:54:38.029820] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.740 [2024-07-23 10:54:38.033941] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.740 [2024-07-23 10:54:38.043150] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.740 [2024-07-23 10:54:38.043653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.740 [2024-07-23 10:54:38.043710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.740 [2024-07-23 10:54:38.043729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.740 [2024-07-23 10:54:38.043999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.740 [2024-07-23 10:54:38.044267] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.740 [2024-07-23 10:54:38.044290] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.740 [2024-07-23 10:54:38.044305] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.741 [2024-07-23 10:54:38.048406] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.741 [2024-07-23 10:54:38.057560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.741 [2024-07-23 10:54:38.058079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.741 [2024-07-23 10:54:38.058120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.741 [2024-07-23 10:54:38.058139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.741 [2024-07-23 10:54:38.058410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.741 [2024-07-23 10:54:38.058693] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.741 [2024-07-23 10:54:38.058717] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.741 [2024-07-23 10:54:38.058732] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.741 [2024-07-23 10:54:38.062872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.741 [2024-07-23 10:54:38.072124] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.741 [2024-07-23 10:54:38.072652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.741 [2024-07-23 10:54:38.072693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.741 [2024-07-23 10:54:38.072712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.741 [2024-07-23 10:54:38.072995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.741 [2024-07-23 10:54:38.073265] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.741 [2024-07-23 10:54:38.073288] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.741 [2024-07-23 10:54:38.073303] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.741 [2024-07-23 10:54:38.077436] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.741 [2024-07-23 10:54:38.086610] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.741 [2024-07-23 10:54:38.087178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.741 [2024-07-23 10:54:38.087219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.741 [2024-07-23 10:54:38.087238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.741 [2024-07-23 10:54:38.087523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.741 [2024-07-23 10:54:38.087798] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.741 [2024-07-23 10:54:38.087821] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.741 [2024-07-23 10:54:38.087837] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.741 [2024-07-23 10:54:38.091975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.741 [2024-07-23 10:54:38.101158] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.741 [2024-07-23 10:54:38.101708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.741 [2024-07-23 10:54:38.101757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.741 [2024-07-23 10:54:38.101774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.741 [2024-07-23 10:54:38.102038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.741 [2024-07-23 10:54:38.102306] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.741 [2024-07-23 10:54:38.102329] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.741 [2024-07-23 10:54:38.102344] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.741 [2024-07-23 10:54:38.106451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.741 [2024-07-23 10:54:38.115597] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.741 [2024-07-23 10:54:38.116124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.741 [2024-07-23 10:54:38.116172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.741 [2024-07-23 10:54:38.116189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.741 [2024-07-23 10:54:38.116452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.741 [2024-07-23 10:54:38.116729] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.741 [2024-07-23 10:54:38.116752] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.741 [2024-07-23 10:54:38.116767] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.741 [2024-07-23 10:54:38.120850] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.741 [2024-07-23 10:54:38.130031] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.741 [2024-07-23 10:54:38.130462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.741 [2024-07-23 10:54:38.130512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.741 [2024-07-23 10:54:38.130538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.741 [2024-07-23 10:54:38.130809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.741 [2024-07-23 10:54:38.131081] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.741 [2024-07-23 10:54:38.131103] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.741 [2024-07-23 10:54:38.131118] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.741 [2024-07-23 10:54:38.135235] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.741 [2024-07-23 10:54:38.144411] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.741 [2024-07-23 10:54:38.144918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.741 [2024-07-23 10:54:38.144960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.741 [2024-07-23 10:54:38.144979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.741 [2024-07-23 10:54:38.145249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.741 [2024-07-23 10:54:38.145531] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.741 [2024-07-23 10:54:38.145554] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.741 [2024-07-23 10:54:38.145570] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.741 [2024-07-23 10:54:38.149646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.741 [2024-07-23 10:54:38.158796] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.741 [2024-07-23 10:54:38.159314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.741 [2024-07-23 10:54:38.159366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.741 [2024-07-23 10:54:38.159383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.741 [2024-07-23 10:54:38.159658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.741 [2024-07-23 10:54:38.159926] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.741 [2024-07-23 10:54:38.159948] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.741 [2024-07-23 10:54:38.159964] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.741 [2024-07-23 10:54:38.164046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.741 [2024-07-23 10:54:38.173238] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.741 [2024-07-23 10:54:38.173756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.741 [2024-07-23 10:54:38.173797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.741 [2024-07-23 10:54:38.173816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.741 [2024-07-23 10:54:38.174087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.741 [2024-07-23 10:54:38.174356] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.741 [2024-07-23 10:54:38.174384] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.741 [2024-07-23 10:54:38.174400] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.741 [2024-07-23 10:54:38.178464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.741 [2024-07-23 10:54:38.187793] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.741 [2024-07-23 10:54:38.188203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.741 [2024-07-23 10:54:38.188252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.741 [2024-07-23 10:54:38.188270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.741 [2024-07-23 10:54:38.188546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.741 [2024-07-23 10:54:38.188814] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.741 [2024-07-23 10:54:38.188837] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.741 [2024-07-23 10:54:38.188852] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.741 [2024-07-23 10:54:38.192901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.742 [2024-07-23 10:54:38.202232] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.742 [2024-07-23 10:54:38.202742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.742 [2024-07-23 10:54:38.202785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.742 [2024-07-23 10:54:38.202804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.742 [2024-07-23 10:54:38.203081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.742 [2024-07-23 10:54:38.203349] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.742 [2024-07-23 10:54:38.203372] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.742 [2024-07-23 10:54:38.203387] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.742 [2024-07-23 10:54:38.207455] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.742 [2024-07-23 10:54:38.216716] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.742 [2024-07-23 10:54:38.217215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.742 [2024-07-23 10:54:38.217256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.742 [2024-07-23 10:54:38.217275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.742 [2024-07-23 10:54:38.217556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.742 [2024-07-23 10:54:38.217836] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.742 [2024-07-23 10:54:38.217859] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.742 [2024-07-23 10:54:38.217875] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.742 [2024-07-23 10:54:38.221952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:49.742 [2024-07-23 10:54:38.231124] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:49.742 [2024-07-23 10:54:38.231565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.742 [2024-07-23 10:54:38.231617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:49.742 [2024-07-23 10:54:38.231635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:49.742 [2024-07-23 10:54:38.231899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:49.742 [2024-07-23 10:54:38.232166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:49.742 [2024-07-23 10:54:38.232189] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:49.742 [2024-07-23 10:54:38.232205] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:49.742 [2024-07-23 10:54:38.236262] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.001 [2024-07-23 10:54:38.245721] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.001 [2024-07-23 10:54:38.246234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.001 [2024-07-23 10:54:38.246285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.001 [2024-07-23 10:54:38.246302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.001 [2024-07-23 10:54:38.246591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.001 [2024-07-23 10:54:38.246869] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.001 [2024-07-23 10:54:38.246893] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.001 [2024-07-23 10:54:38.246908] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.001 [2024-07-23 10:54:38.250958] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.001 [2024-07-23 10:54:38.260092] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.001 [2024-07-23 10:54:38.260587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.001 [2024-07-23 10:54:38.260642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.001 [2024-07-23 10:54:38.260660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.001 [2024-07-23 10:54:38.260929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.001 [2024-07-23 10:54:38.261203] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.001 [2024-07-23 10:54:38.261225] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.001 [2024-07-23 10:54:38.261240] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.001 [2024-07-23 10:54:38.265278] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.001 [2024-07-23 10:54:38.274595] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.001 [2024-07-23 10:54:38.275090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.001 [2024-07-23 10:54:38.275140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.001 [2024-07-23 10:54:38.275157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.001 [2024-07-23 10:54:38.275427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.001 [2024-07-23 10:54:38.275703] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.001 [2024-07-23 10:54:38.275726] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.001 [2024-07-23 10:54:38.275741] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.001 [2024-07-23 10:54:38.279776] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.001 [2024-07-23 10:54:38.289092] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.001 [2024-07-23 10:54:38.289627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.001 [2024-07-23 10:54:38.289686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.001 [2024-07-23 10:54:38.289705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.001 [2024-07-23 10:54:38.289975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.001 [2024-07-23 10:54:38.290251] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.001 [2024-07-23 10:54:38.290273] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.001 [2024-07-23 10:54:38.290289] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.001 [2024-07-23 10:54:38.294349] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.001 [2024-07-23 10:54:38.303693] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.001 [2024-07-23 10:54:38.304148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.001 [2024-07-23 10:54:38.304189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.001 [2024-07-23 10:54:38.304208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.001 [2024-07-23 10:54:38.304488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.001 [2024-07-23 10:54:38.304759] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.001 [2024-07-23 10:54:38.304781] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.001 [2024-07-23 10:54:38.304796] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.001 [2024-07-23 10:54:38.308849] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.001 [2024-07-23 10:54:38.318258] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.001 [2024-07-23 10:54:38.318752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.001 [2024-07-23 10:54:38.318794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.001 [2024-07-23 10:54:38.318813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.001 [2024-07-23 10:54:38.319083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.001 [2024-07-23 10:54:38.319353] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.001 [2024-07-23 10:54:38.319375] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.001 [2024-07-23 10:54:38.319397] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.001 [2024-07-23 10:54:38.323474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.001 [2024-07-23 10:54:38.332724] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.001 [2024-07-23 10:54:38.333281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.001 [2024-07-23 10:54:38.333323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.001 [2024-07-23 10:54:38.333341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.001 [2024-07-23 10:54:38.333625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.001 [2024-07-23 10:54:38.333894] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.001 [2024-07-23 10:54:38.333917] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.001 [2024-07-23 10:54:38.333932] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.001 [2024-07-23 10:54:38.338009] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.002 [2024-07-23 10:54:38.347216] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.002 [2024-07-23 10:54:38.347585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.002 [2024-07-23 10:54:38.347616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.002 [2024-07-23 10:54:38.347633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.002 [2024-07-23 10:54:38.347898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.002 [2024-07-23 10:54:38.348166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.002 [2024-07-23 10:54:38.348188] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.002 [2024-07-23 10:54:38.348204] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.002 [2024-07-23 10:54:38.352289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.002 [2024-07-23 10:54:38.361705] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.002 [2024-07-23 10:54:38.362160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.002 [2024-07-23 10:54:38.362213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.002 [2024-07-23 10:54:38.362231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.002 [2024-07-23 10:54:38.362503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.002 [2024-07-23 10:54:38.362771] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.002 [2024-07-23 10:54:38.362794] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.002 [2024-07-23 10:54:38.362809] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.002 [2024-07-23 10:54:38.366906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.002 [2024-07-23 10:54:38.376118] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.002 [2024-07-23 10:54:38.376681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.002 [2024-07-23 10:54:38.376723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.002 [2024-07-23 10:54:38.376742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.002 [2024-07-23 10:54:38.377012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.002 [2024-07-23 10:54:38.377282] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.002 [2024-07-23 10:54:38.377304] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.002 [2024-07-23 10:54:38.377319] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.002 [2024-07-23 10:54:38.381427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.002 [2024-07-23 10:54:38.390561] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.002 [2024-07-23 10:54:38.391396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.002 [2024-07-23 10:54:38.391433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.002 [2024-07-23 10:54:38.391452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.002 [2024-07-23 10:54:38.391735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.002 [2024-07-23 10:54:38.392006] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.002 [2024-07-23 10:54:38.392029] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.002 [2024-07-23 10:54:38.392045] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.002 [2024-07-23 10:54:38.396102] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.002 [2024-07-23 10:54:38.405057] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.002 [2024-07-23 10:54:38.405561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.002 [2024-07-23 10:54:38.405592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.002 [2024-07-23 10:54:38.405609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.002 [2024-07-23 10:54:38.405879] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.002 [2024-07-23 10:54:38.406147] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.002 [2024-07-23 10:54:38.406170] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.002 [2024-07-23 10:54:38.406185] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.002 [2024-07-23 10:54:38.410315] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.002 [2024-07-23 10:54:38.419617] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.002 [2024-07-23 10:54:38.420057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.002 [2024-07-23 10:54:38.420112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.002 [2024-07-23 10:54:38.420130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.002 [2024-07-23 10:54:38.420399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.002 [2024-07-23 10:54:38.420678] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.002 [2024-07-23 10:54:38.420701] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.002 [2024-07-23 10:54:38.420716] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.002 [2024-07-23 10:54:38.424888] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.002 [2024-07-23 10:54:38.434152] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.002 [2024-07-23 10:54:38.434661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.002 [2024-07-23 10:54:38.434709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.002 [2024-07-23 10:54:38.434726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.002 [2024-07-23 10:54:38.434990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.002 [2024-07-23 10:54:38.435258] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.002 [2024-07-23 10:54:38.435280] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.002 [2024-07-23 10:54:38.435296] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.002 [2024-07-23 10:54:38.439424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.002 [2024-07-23 10:54:38.448614] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.002 [2024-07-23 10:54:38.449024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.002 [2024-07-23 10:54:38.449054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.002 [2024-07-23 10:54:38.449071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.002 [2024-07-23 10:54:38.449334] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.002 [2024-07-23 10:54:38.449611] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.002 [2024-07-23 10:54:38.449635] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.002 [2024-07-23 10:54:38.449650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.002 [2024-07-23 10:54:38.453708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.002 [2024-07-23 10:54:38.463141] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.002 [2024-07-23 10:54:38.463633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.002 [2024-07-23 10:54:38.463675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.002 [2024-07-23 10:54:38.463694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.002 [2024-07-23 10:54:38.463965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.002 [2024-07-23 10:54:38.464233] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.002 [2024-07-23 10:54:38.464256] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.002 [2024-07-23 10:54:38.464287] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.002 [2024-07-23 10:54:38.468389] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.002 [2024-07-23 10:54:38.477623] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.002 [2024-07-23 10:54:38.478174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.002 [2024-07-23 10:54:38.478230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.002 [2024-07-23 10:54:38.478249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.002 [2024-07-23 10:54:38.478533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.002 [2024-07-23 10:54:38.478803] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.002 [2024-07-23 10:54:38.478826] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.002 [2024-07-23 10:54:38.478841] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.002 [2024-07-23 10:54:38.482927] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.002 [2024-07-23 10:54:38.492159] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.002 [2024-07-23 10:54:38.492791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.003 [2024-07-23 10:54:38.492833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.003 [2024-07-23 10:54:38.492852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.003 [2024-07-23 10:54:38.493122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.003 [2024-07-23 10:54:38.493391] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.003 [2024-07-23 10:54:38.493414] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.003 [2024-07-23 10:54:38.493429] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.003 [2024-07-23 10:54:38.497522] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.261 [2024-07-23 10:54:38.506607] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.261 [2024-07-23 10:54:38.507098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.261 [2024-07-23 10:54:38.507139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.261 [2024-07-23 10:54:38.507159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.261 [2024-07-23 10:54:38.507461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.261 [2024-07-23 10:54:38.507743] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.261 [2024-07-23 10:54:38.507767] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.261 [2024-07-23 10:54:38.507783] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.261 [2024-07-23 10:54:38.511893] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.261 [2024-07-23 10:54:38.521127] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.261 [2024-07-23 10:54:38.521706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.261 [2024-07-23 10:54:38.521759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.261 [2024-07-23 10:54:38.521779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.261 [2024-07-23 10:54:38.522049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.261 [2024-07-23 10:54:38.522317] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.261 [2024-07-23 10:54:38.522340] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.261 [2024-07-23 10:54:38.522355] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.261 [2024-07-23 10:54:38.526417] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.261 [2024-07-23 10:54:38.535550] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.261 [2024-07-23 10:54:38.536026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.261 [2024-07-23 10:54:38.536067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.261 [2024-07-23 10:54:38.536087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.261 [2024-07-23 10:54:38.536357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.261 [2024-07-23 10:54:38.536643] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.262 [2024-07-23 10:54:38.536667] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.262 [2024-07-23 10:54:38.536683] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.262 [2024-07-23 10:54:38.540726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.262 [2024-07-23 10:54:38.550134] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.262 [2024-07-23 10:54:38.550569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.262 [2024-07-23 10:54:38.550634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.262 [2024-07-23 10:54:38.550654] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.262 [2024-07-23 10:54:38.550924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.262 [2024-07-23 10:54:38.551194] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.262 [2024-07-23 10:54:38.551216] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.262 [2024-07-23 10:54:38.551231] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.262 [2024-07-23 10:54:38.555282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.262 [2024-07-23 10:54:38.564660] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.262 [2024-07-23 10:54:38.565084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.262 [2024-07-23 10:54:38.565124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.262 [2024-07-23 10:54:38.565143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.262 [2024-07-23 10:54:38.565413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.262 [2024-07-23 10:54:38.565701] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.262 [2024-07-23 10:54:38.565726] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.262 [2024-07-23 10:54:38.565741] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.262 [2024-07-23 10:54:38.569796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.262 [2024-07-23 10:54:38.579177] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.262 [2024-07-23 10:54:38.579786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.262 [2024-07-23 10:54:38.579827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.262 [2024-07-23 10:54:38.579846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.262 [2024-07-23 10:54:38.580117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.262 [2024-07-23 10:54:38.580386] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.262 [2024-07-23 10:54:38.580408] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.262 [2024-07-23 10:54:38.580424] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.262 [2024-07-23 10:54:38.584539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.262 [2024-07-23 10:54:38.593682] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.262 [2024-07-23 10:54:38.594216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.262 [2024-07-23 10:54:38.594271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.262 [2024-07-23 10:54:38.594290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.262 [2024-07-23 10:54:38.594574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.262 [2024-07-23 10:54:38.594849] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.262 [2024-07-23 10:54:38.594872] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.262 [2024-07-23 10:54:38.594887] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.262 [2024-07-23 10:54:38.598965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.262 [2024-07-23 10:54:38.608108] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.262 [2024-07-23 10:54:38.608667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.262 [2024-07-23 10:54:38.608708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.262 [2024-07-23 10:54:38.608727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.262 [2024-07-23 10:54:38.608998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.262 [2024-07-23 10:54:38.609266] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.262 [2024-07-23 10:54:38.609289] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.262 [2024-07-23 10:54:38.609304] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.262 [2024-07-23 10:54:38.613381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.262 [2024-07-23 10:54:38.622471] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.262 [2024-07-23 10:54:38.622952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.262 [2024-07-23 10:54:38.623009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.262 [2024-07-23 10:54:38.623027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.262 [2024-07-23 10:54:38.623290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.262 [2024-07-23 10:54:38.623568] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.262 [2024-07-23 10:54:38.623591] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.262 [2024-07-23 10:54:38.623607] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.262 [2024-07-23 10:54:38.627651] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.262 [2024-07-23 10:54:38.637000] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.262 [2024-07-23 10:54:38.637411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.262 [2024-07-23 10:54:38.637457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.262 [2024-07-23 10:54:38.637475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.262 [2024-07-23 10:54:38.637756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.262 [2024-07-23 10:54:38.638030] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.262 [2024-07-23 10:54:38.638052] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.262 [2024-07-23 10:54:38.638067] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.262 [2024-07-23 10:54:38.642119] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.262 [2024-07-23 10:54:38.651507] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.262 [2024-07-23 10:54:38.652020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.262 [2024-07-23 10:54:38.652060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.262 [2024-07-23 10:54:38.652080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.262 [2024-07-23 10:54:38.652356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.262 [2024-07-23 10:54:38.652638] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.262 [2024-07-23 10:54:38.652661] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.262 [2024-07-23 10:54:38.652677] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.262 [2024-07-23 10:54:38.656733] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.262 [2024-07-23 10:54:38.666058] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.262 [2024-07-23 10:54:38.666563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.262 [2024-07-23 10:54:38.666605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.262 [2024-07-23 10:54:38.666630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.262 [2024-07-23 10:54:38.666902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.262 [2024-07-23 10:54:38.667171] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.262 [2024-07-23 10:54:38.667193] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.262 [2024-07-23 10:54:38.667209] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.262 [2024-07-23 10:54:38.671257] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.262 [2024-07-23 10:54:38.680558] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.262 [2024-07-23 10:54:38.681063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.262 [2024-07-23 10:54:38.681101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.262 [2024-07-23 10:54:38.681131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.262 [2024-07-23 10:54:38.681395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.262 [2024-07-23 10:54:38.681683] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.262 [2024-07-23 10:54:38.681708] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.262 [2024-07-23 10:54:38.681724] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.263 [2024-07-23 10:54:38.685782] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.263 [2024-07-23 10:54:38.695100] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.263 [2024-07-23 10:54:38.695502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.263 [2024-07-23 10:54:38.695533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.263 [2024-07-23 10:54:38.695551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.263 [2024-07-23 10:54:38.695815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.263 [2024-07-23 10:54:38.696091] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.263 [2024-07-23 10:54:38.696113] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.263 [2024-07-23 10:54:38.696128] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.263 [2024-07-23 10:54:38.700178] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.263 [2024-07-23 10:54:38.709490] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.263 [2024-07-23 10:54:38.709907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.263 [2024-07-23 10:54:38.709941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.263 [2024-07-23 10:54:38.709971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.263 [2024-07-23 10:54:38.710235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.263 [2024-07-23 10:54:38.710512] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.263 [2024-07-23 10:54:38.710540] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.263 [2024-07-23 10:54:38.710556] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.263 [2024-07-23 10:54:38.714601] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.263 [2024-07-23 10:54:38.723925] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.263 [2024-07-23 10:54:38.724379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.263 [2024-07-23 10:54:38.724428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.263 [2024-07-23 10:54:38.724446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.263 [2024-07-23 10:54:38.724721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.263 [2024-07-23 10:54:38.724990] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.263 [2024-07-23 10:54:38.725012] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.263 [2024-07-23 10:54:38.725027] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.263 [2024-07-23 10:54:38.729073] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.263 [2024-07-23 10:54:38.738434] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.263 [2024-07-23 10:54:38.738918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.263 [2024-07-23 10:54:38.738947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.263 [2024-07-23 10:54:38.738964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.263 [2024-07-23 10:54:38.739227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.263 [2024-07-23 10:54:38.739511] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.263 [2024-07-23 10:54:38.739534] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.263 [2024-07-23 10:54:38.739549] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.263 [2024-07-23 10:54:38.743622] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.263 [2024-07-23 10:54:38.752986] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.263 [2024-07-23 10:54:38.753459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.263 [2024-07-23 10:54:38.753517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.263 [2024-07-23 10:54:38.753535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.263 [2024-07-23 10:54:38.753798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.263 [2024-07-23 10:54:38.754065] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.263 [2024-07-23 10:54:38.754088] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.263 [2024-07-23 10:54:38.754103] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.263 [2024-07-23 10:54:38.758175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.522 [2024-07-23 10:54:38.767413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.522 [2024-07-23 10:54:38.767910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.522 [2024-07-23 10:54:38.767977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.522 [2024-07-23 10:54:38.767996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.522 [2024-07-23 10:54:38.768266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.522 [2024-07-23 10:54:38.768548] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.522 [2024-07-23 10:54:38.768572] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.522 [2024-07-23 10:54:38.768588] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.522 [2024-07-23 10:54:38.772668] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.522 [2024-07-23 10:54:38.781812] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.522 [2024-07-23 10:54:38.782368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.522 [2024-07-23 10:54:38.782410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.522 [2024-07-23 10:54:38.782429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.522 [2024-07-23 10:54:38.782711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.522 [2024-07-23 10:54:38.782993] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.522 [2024-07-23 10:54:38.783015] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.522 [2024-07-23 10:54:38.783030] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.522 [2024-07-23 10:54:38.787118] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.522 [2024-07-23 10:54:38.796228] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.522 [2024-07-23 10:54:38.796690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.522 [2024-07-23 10:54:38.796743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.522 [2024-07-23 10:54:38.796760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.522 [2024-07-23 10:54:38.797025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.522 [2024-07-23 10:54:38.797293] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.522 [2024-07-23 10:54:38.797316] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.522 [2024-07-23 10:54:38.797331] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.522 [2024-07-23 10:54:38.801377] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.522 [2024-07-23 10:54:38.810785] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.522 [2024-07-23 10:54:38.811264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.522 [2024-07-23 10:54:38.811306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.522 [2024-07-23 10:54:38.811324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.522 [2024-07-23 10:54:38.811614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.522 [2024-07-23 10:54:38.811884] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.522 [2024-07-23 10:54:38.811907] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.522 [2024-07-23 10:54:38.811922] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.522 [2024-07-23 10:54:38.816022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.522 [2024-07-23 10:54:38.825195] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.522 [2024-07-23 10:54:38.825717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.522 [2024-07-23 10:54:38.825767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.522 [2024-07-23 10:54:38.825784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.522 [2024-07-23 10:54:38.826055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.522 [2024-07-23 10:54:38.826331] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.522 [2024-07-23 10:54:38.826353] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.522 [2024-07-23 10:54:38.826369] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.522 [2024-07-23 10:54:38.830427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.522 [2024-07-23 10:54:38.839590] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.522 [2024-07-23 10:54:38.840106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.522 [2024-07-23 10:54:38.840147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.522 [2024-07-23 10:54:38.840167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.522 [2024-07-23 10:54:38.840437] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.522 [2024-07-23 10:54:38.840718] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.522 [2024-07-23 10:54:38.840741] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.522 [2024-07-23 10:54:38.840756] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.522 [2024-07-23 10:54:38.844846] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.522 [2024-07-23 10:54:38.854023] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.522 [2024-07-23 10:54:38.854541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.522 [2024-07-23 10:54:38.854572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.522 [2024-07-23 10:54:38.854590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.522 [2024-07-23 10:54:38.854855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.522 [2024-07-23 10:54:38.855126] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.522 [2024-07-23 10:54:38.855149] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.522 [2024-07-23 10:54:38.855170] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.522 [2024-07-23 10:54:38.859237] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.522 [2024-07-23 10:54:38.868617] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.522 [2024-07-23 10:54:38.869112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.522 [2024-07-23 10:54:38.869153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.522 [2024-07-23 10:54:38.869171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.522 [2024-07-23 10:54:38.869442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.522 [2024-07-23 10:54:38.869722] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.522 [2024-07-23 10:54:38.869745] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.523 [2024-07-23 10:54:38.869761] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.523 [2024-07-23 10:54:38.873846] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.523 [2024-07-23 10:54:38.882958] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.523 [2024-07-23 10:54:38.883459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.523 [2024-07-23 10:54:38.883507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.523 [2024-07-23 10:54:38.883527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.523 [2024-07-23 10:54:38.883804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.523 [2024-07-23 10:54:38.884079] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.523 [2024-07-23 10:54:38.884101] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.523 [2024-07-23 10:54:38.884117] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.523 [2024-07-23 10:54:38.888188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.523 [2024-07-23 10:54:38.897576] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.523 [2024-07-23 10:54:38.898128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.523 [2024-07-23 10:54:38.898181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.523 [2024-07-23 10:54:38.898200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.523 [2024-07-23 10:54:38.898470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.523 [2024-07-23 10:54:38.898752] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.523 [2024-07-23 10:54:38.898774] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.523 [2024-07-23 10:54:38.898790] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.523 [2024-07-23 10:54:38.902866] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.523 [2024-07-23 10:54:38.912008] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.523 [2024-07-23 10:54:38.912504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.523 [2024-07-23 10:54:38.912535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.523 [2024-07-23 10:54:38.912552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.523 [2024-07-23 10:54:38.912823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.523 [2024-07-23 10:54:38.913091] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.523 [2024-07-23 10:54:38.913114] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.523 [2024-07-23 10:54:38.913129] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.523 [2024-07-23 10:54:38.917181] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.523 [2024-07-23 10:54:38.926546] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.523 [2024-07-23 10:54:38.927002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.523 [2024-07-23 10:54:38.927074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.523 [2024-07-23 10:54:38.927093] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.523 [2024-07-23 10:54:38.927369] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.523 [2024-07-23 10:54:38.927650] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.523 [2024-07-23 10:54:38.927673] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.523 [2024-07-23 10:54:38.927689] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.523 [2024-07-23 10:54:38.931798] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.523 [2024-07-23 10:54:38.941023] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.523 [2024-07-23 10:54:38.941501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.523 [2024-07-23 10:54:38.941549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.523 [2024-07-23 10:54:38.941567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.523 [2024-07-23 10:54:38.941831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.523 [2024-07-23 10:54:38.942099] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.523 [2024-07-23 10:54:38.942122] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.523 [2024-07-23 10:54:38.942143] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.523 [2024-07-23 10:54:38.946229] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.523 [2024-07-23 10:54:38.955574] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.523 [2024-07-23 10:54:38.956082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.523 [2024-07-23 10:54:38.956123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:50.523 [2024-07-23 10:54:38.956142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:50.523 [2024-07-23 10:54:38.956418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:50.523 [2024-07-23 10:54:38.956710] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.523 [2024-07-23 10:54:38.956734] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.523 [2024-07-23 10:54:38.956750] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.523 [2024-07-23 10:54:38.960807] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:50.523 [2024-07-23 10:54:38.969963] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.523 [2024-07-23 10:54:38.970523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.523 [2024-07-23 10:54:38.970565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.523 [2024-07-23 10:54:38.970584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.523 [2024-07-23 10:54:38.970854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.523 [2024-07-23 10:54:38.971129] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.523 [2024-07-23 10:54:38.971151] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.523 [2024-07-23 10:54:38.971167] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.523 [2024-07-23 10:54:38.975241] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.523 [2024-07-23 10:54:38.984384] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.523 [2024-07-23 10:54:38.984898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.523 [2024-07-23 10:54:38.984945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.523 [2024-07-23 10:54:38.984963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.523 [2024-07-23 10:54:38.985228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.523 [2024-07-23 10:54:38.985506] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.523 [2024-07-23 10:54:38.985529] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.523 [2024-07-23 10:54:38.985545] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.523 [2024-07-23 10:54:38.989627] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.523 [2024-07-23 10:54:38.998784] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.523 [2024-07-23 10:54:38.999286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.523 [2024-07-23 10:54:38.999336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.523 [2024-07-23 10:54:38.999353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.523 [2024-07-23 10:54:38.999627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.523 [2024-07-23 10:54:38.999895] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.523 [2024-07-23 10:54:38.999918] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.523 [2024-07-23 10:54:38.999933] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.523 [2024-07-23 10:54:39.004023] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.523 [2024-07-23 10:54:39.013199] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.523 [2024-07-23 10:54:39.013746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.524 [2024-07-23 10:54:39.013787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.524 [2024-07-23 10:54:39.013806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.524 [2024-07-23 10:54:39.014077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.524 [2024-07-23 10:54:39.014345] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.524 [2024-07-23 10:54:39.014368] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.524 [2024-07-23 10:54:39.014384] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.524 [2024-07-23 10:54:39.018450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.783 [2024-07-23 10:54:39.027713] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.783 [2024-07-23 10:54:39.028215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.783 [2024-07-23 10:54:39.028264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.783 [2024-07-23 10:54:39.028282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.783 [2024-07-23 10:54:39.028573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.783 [2024-07-23 10:54:39.028853] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.783 [2024-07-23 10:54:39.028877] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.783 [2024-07-23 10:54:39.028892] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.783 [2024-07-23 10:54:39.032969] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.783 [2024-07-23 10:54:39.042088] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.783 [2024-07-23 10:54:39.042624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.783 [2024-07-23 10:54:39.042681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.783 [2024-07-23 10:54:39.042700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.783 [2024-07-23 10:54:39.042970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.783 [2024-07-23 10:54:39.043239] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.783 [2024-07-23 10:54:39.043262] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.783 [2024-07-23 10:54:39.043277] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.783 [2024-07-23 10:54:39.047349] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.783 [2024-07-23 10:54:39.056563] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.783 [2024-07-23 10:54:39.057026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.783 [2024-07-23 10:54:39.057077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.783 [2024-07-23 10:54:39.057100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.783 [2024-07-23 10:54:39.057371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.783 [2024-07-23 10:54:39.057661] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.783 [2024-07-23 10:54:39.057685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.783 [2024-07-23 10:54:39.057707] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.783 [2024-07-23 10:54:39.061796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.783 [2024-07-23 10:54:39.070985] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.783 [2024-07-23 10:54:39.071352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.783 [2024-07-23 10:54:39.071383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.783 [2024-07-23 10:54:39.071401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.783 [2024-07-23 10:54:39.071681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.783 [2024-07-23 10:54:39.071956] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.783 [2024-07-23 10:54:39.071978] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.783 [2024-07-23 10:54:39.071994] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.783 [2024-07-23 10:54:39.076047] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.783 [2024-07-23 10:54:39.085444] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.783 [2024-07-23 10:54:39.085929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.783 [2024-07-23 10:54:39.085970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.783 [2024-07-23 10:54:39.085989] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.783 [2024-07-23 10:54:39.086260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.783 [2024-07-23 10:54:39.086542] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.783 [2024-07-23 10:54:39.086565] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.783 [2024-07-23 10:54:39.086581] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.783 [2024-07-23 10:54:39.090680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.783 [2024-07-23 10:54:39.099838] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.783 [2024-07-23 10:54:39.100352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.783 [2024-07-23 10:54:39.100393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.783 [2024-07-23 10:54:39.100412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.783 [2024-07-23 10:54:39.100696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.783 [2024-07-23 10:54:39.100977] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.783 [2024-07-23 10:54:39.101000] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.783 [2024-07-23 10:54:39.101015] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.783 [2024-07-23 10:54:39.105067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.783 [2024-07-23 10:54:39.114208] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.783 [2024-07-23 10:54:39.114715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.783 [2024-07-23 10:54:39.114756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.783 [2024-07-23 10:54:39.114775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.783 [2024-07-23 10:54:39.115046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.783 [2024-07-23 10:54:39.115315] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.783 [2024-07-23 10:54:39.115338] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.783 [2024-07-23 10:54:39.115353] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.783 [2024-07-23 10:54:39.119440] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.783 [2024-07-23 10:54:39.128613] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.783 [2024-07-23 10:54:39.129172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.783 [2024-07-23 10:54:39.129214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.783 [2024-07-23 10:54:39.129232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.783 [2024-07-23 10:54:39.129516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.783 [2024-07-23 10:54:39.129791] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.783 [2024-07-23 10:54:39.129813] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.783 [2024-07-23 10:54:39.129828] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.783 [2024-07-23 10:54:39.133896] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.783 [2024-07-23 10:54:39.143014] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.783 [2024-07-23 10:54:39.143525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.783 [2024-07-23 10:54:39.143602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.783 [2024-07-23 10:54:39.143621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.783 [2024-07-23 10:54:39.143892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.783 [2024-07-23 10:54:39.144167] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.783 [2024-07-23 10:54:39.144189] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.783 [2024-07-23 10:54:39.144205] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.783 [2024-07-23 10:54:39.148279] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.783 [2024-07-23 10:54:39.157466] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.784 [2024-07-23 10:54:39.158004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.784 [2024-07-23 10:54:39.158059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.784 [2024-07-23 10:54:39.158078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.784 [2024-07-23 10:54:39.158354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.784 [2024-07-23 10:54:39.158636] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.784 [2024-07-23 10:54:39.158659] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.784 [2024-07-23 10:54:39.158675] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.784 [2024-07-23 10:54:39.162741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.784 [2024-07-23 10:54:39.171869] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.784 [2024-07-23 10:54:39.172298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.784 [2024-07-23 10:54:39.172350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.784 [2024-07-23 10:54:39.172368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.784 [2024-07-23 10:54:39.172655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.784 [2024-07-23 10:54:39.172925] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.784 [2024-07-23 10:54:39.172955] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.784 [2024-07-23 10:54:39.172970] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.784 [2024-07-23 10:54:39.177039] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.784 [2024-07-23 10:54:39.186419] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.784 [2024-07-23 10:54:39.187013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.784 [2024-07-23 10:54:39.187054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.784 [2024-07-23 10:54:39.187073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.784 [2024-07-23 10:54:39.187343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.784 [2024-07-23 10:54:39.187631] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.784 [2024-07-23 10:54:39.187654] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.784 [2024-07-23 10:54:39.187669] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.784 [2024-07-23 10:54:39.191777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.784 [2024-07-23 10:54:39.200964] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.784 [2024-07-23 10:54:39.201455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.784 [2024-07-23 10:54:39.201492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.784 [2024-07-23 10:54:39.201520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.784 [2024-07-23 10:54:39.201785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.784 [2024-07-23 10:54:39.202059] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.784 [2024-07-23 10:54:39.202082] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.784 [2024-07-23 10:54:39.202097] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.784 [2024-07-23 10:54:39.206174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.784 [2024-07-23 10:54:39.215584] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.784 [2024-07-23 10:54:39.216084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.784 [2024-07-23 10:54:39.216134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.784 [2024-07-23 10:54:39.216151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.784 [2024-07-23 10:54:39.216414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.784 [2024-07-23 10:54:39.216691] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.784 [2024-07-23 10:54:39.216714] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.784 [2024-07-23 10:54:39.216729] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.784 [2024-07-23 10:54:39.220794] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.784 [2024-07-23 10:54:39.230154] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.784 [2024-07-23 10:54:39.230573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.784 [2024-07-23 10:54:39.230614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.784 [2024-07-23 10:54:39.230633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.784 [2024-07-23 10:54:39.230904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.784 [2024-07-23 10:54:39.231179] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.784 [2024-07-23 10:54:39.231201] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.784 [2024-07-23 10:54:39.231216] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.784 [2024-07-23 10:54:39.235439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.784 [2024-07-23 10:54:39.244598] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.784 [2024-07-23 10:54:39.245113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.784 [2024-07-23 10:54:39.245154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.784 [2024-07-23 10:54:39.245172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.784 [2024-07-23 10:54:39.245443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.784 [2024-07-23 10:54:39.245724] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.784 [2024-07-23 10:54:39.245753] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.784 [2024-07-23 10:54:39.245769] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.784 [2024-07-23 10:54:39.249865] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.784 [2024-07-23 10:54:39.259038] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.784 [2024-07-23 10:54:39.259541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.784 [2024-07-23 10:54:39.259583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.784 [2024-07-23 10:54:39.259602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.784 [2024-07-23 10:54:39.259872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.784 [2024-07-23 10:54:39.260141] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.784 [2024-07-23 10:54:39.260164] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.784 [2024-07-23 10:54:39.260179] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.784 [2024-07-23 10:54:39.264273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:50.784 [2024-07-23 10:54:39.273402] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:50.784 [2024-07-23 10:54:39.273899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:50.784 [2024-07-23 10:54:39.273939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:50.784 [2024-07-23 10:54:39.273958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:50.784 [2024-07-23 10:54:39.274229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:50.784 [2024-07-23 10:54:39.274512] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:50.784 [2024-07-23 10:54:39.274535] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:50.784 [2024-07-23 10:54:39.274551] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:50.784 [2024-07-23 10:54:39.278650] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.043 [2024-07-23 10:54:39.287872] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.043 [2024-07-23 10:54:39.288342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.043 [2024-07-23 10:54:39.288378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.043 [2024-07-23 10:54:39.288397] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.043 [2024-07-23 10:54:39.288673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.043 [2024-07-23 10:54:39.288962] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.043 [2024-07-23 10:54:39.288986] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.043 [2024-07-23 10:54:39.289001] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.043 [2024-07-23 10:54:39.293111] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.043 [2024-07-23 10:54:39.302217] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.043 [2024-07-23 10:54:39.302715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.043 [2024-07-23 10:54:39.302795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.043 [2024-07-23 10:54:39.302815] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.043 [2024-07-23 10:54:39.303085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.043 [2024-07-23 10:54:39.303363] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.043 [2024-07-23 10:54:39.303385] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.043 [2024-07-23 10:54:39.303400] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.043 [2024-07-23 10:54:39.307484] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.043 [2024-07-23 10:54:39.316644] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.043 [2024-07-23 10:54:39.317193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.043 [2024-07-23 10:54:39.317235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.043 [2024-07-23 10:54:39.317253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.043 [2024-07-23 10:54:39.317537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.043 [2024-07-23 10:54:39.317807] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.043 [2024-07-23 10:54:39.317829] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.043 [2024-07-23 10:54:39.317846] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.043 [2024-07-23 10:54:39.321951] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.044 [2024-07-23 10:54:39.331149] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.044 [2024-07-23 10:54:39.331707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.044 [2024-07-23 10:54:39.331748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.044 [2024-07-23 10:54:39.331767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.044 [2024-07-23 10:54:39.332038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.044 [2024-07-23 10:54:39.332312] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.044 [2024-07-23 10:54:39.332334] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.044 [2024-07-23 10:54:39.332350] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.044 [2024-07-23 10:54:39.336445] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.044 [2024-07-23 10:54:39.345627] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.044 [2024-07-23 10:54:39.346136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.044 [2024-07-23 10:54:39.346166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.044 [2024-07-23 10:54:39.346184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.044 [2024-07-23 10:54:39.346457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.044 [2024-07-23 10:54:39.346734] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.044 [2024-07-23 10:54:39.346757] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.044 [2024-07-23 10:54:39.346773] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.044 [2024-07-23 10:54:39.350855] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.044 [2024-07-23 10:54:39.360007] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.044 [2024-07-23 10:54:39.360552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.044 [2024-07-23 10:54:39.360596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.044 [2024-07-23 10:54:39.360615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.044 [2024-07-23 10:54:39.360886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.044 [2024-07-23 10:54:39.361161] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.044 [2024-07-23 10:54:39.361183] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.044 [2024-07-23 10:54:39.361199] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.044 [2024-07-23 10:54:39.365288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.044 [2024-07-23 10:54:39.374446] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.044 [2024-07-23 10:54:39.374913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.044 [2024-07-23 10:54:39.374963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.044 [2024-07-23 10:54:39.374981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.044 [2024-07-23 10:54:39.375245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.044 [2024-07-23 10:54:39.375521] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.044 [2024-07-23 10:54:39.375544] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.044 [2024-07-23 10:54:39.375560] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.044 [2024-07-23 10:54:39.379651] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.044 [2024-07-23 10:54:39.388846] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.044 [2024-07-23 10:54:39.389334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.044 [2024-07-23 10:54:39.389384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.044 [2024-07-23 10:54:39.389401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.044 [2024-07-23 10:54:39.389675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.044 [2024-07-23 10:54:39.389944] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.044 [2024-07-23 10:54:39.389967] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.044 [2024-07-23 10:54:39.389990] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.044 [2024-07-23 10:54:39.394061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.044 [2024-07-23 10:54:39.403393] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.044 [2024-07-23 10:54:39.403827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.044 [2024-07-23 10:54:39.403871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.044 [2024-07-23 10:54:39.403889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.044 [2024-07-23 10:54:39.404154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.044 [2024-07-23 10:54:39.404422] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.044 [2024-07-23 10:54:39.404444] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.044 [2024-07-23 10:54:39.404461] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.044 [2024-07-23 10:54:39.408533] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.044 [2024-07-23 10:54:39.417908] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.044 [2024-07-23 10:54:39.418326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.044 [2024-07-23 10:54:39.418402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.044 [2024-07-23 10:54:39.418420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.044 [2024-07-23 10:54:39.418694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.044 [2024-07-23 10:54:39.418962] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.044 [2024-07-23 10:54:39.418985] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.044 [2024-07-23 10:54:39.419000] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.044 [2024-07-23 10:54:39.423058] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.044 [2024-07-23 10:54:39.432385] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.044 [2024-07-23 10:54:39.432900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.044 [2024-07-23 10:54:39.432929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.044 [2024-07-23 10:54:39.432946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.044 [2024-07-23 10:54:39.433210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.044 [2024-07-23 10:54:39.433478] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.044 [2024-07-23 10:54:39.433509] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.044 [2024-07-23 10:54:39.433525] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.044 [2024-07-23 10:54:39.437576] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.044 [2024-07-23 10:54:39.446769] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.044 [2024-07-23 10:54:39.447307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.044 [2024-07-23 10:54:39.447368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.044 [2024-07-23 10:54:39.447388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.044 [2024-07-23 10:54:39.447670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.044 [2024-07-23 10:54:39.447945] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.044 [2024-07-23 10:54:39.447968] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.044 [2024-07-23 10:54:39.447983] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.044 [2024-07-23 10:54:39.452056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.044 [2024-07-23 10:54:39.461235] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.044 [2024-07-23 10:54:39.461760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.044 [2024-07-23 10:54:39.461801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.044 [2024-07-23 10:54:39.461820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.044 [2024-07-23 10:54:39.462090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.044 [2024-07-23 10:54:39.462359] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.044 [2024-07-23 10:54:39.462381] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.044 [2024-07-23 10:54:39.462397] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.044 [2024-07-23 10:54:39.466462] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.045 [2024-07-23 10:54:39.475656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.045 [2024-07-23 10:54:39.476102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.045 [2024-07-23 10:54:39.476151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.045 [2024-07-23 10:54:39.476168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.045 [2024-07-23 10:54:39.476432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.045 [2024-07-23 10:54:39.476715] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.045 [2024-07-23 10:54:39.476738] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.045 [2024-07-23 10:54:39.476754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.045 [2024-07-23 10:54:39.480827] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.045 [2024-07-23 10:54:39.490200] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.045 [2024-07-23 10:54:39.490690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.045 [2024-07-23 10:54:39.490774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.045 [2024-07-23 10:54:39.490792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.045 [2024-07-23 10:54:39.491056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.045 [2024-07-23 10:54:39.491330] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.045 [2024-07-23 10:54:39.491353] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.045 [2024-07-23 10:54:39.491368] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.045 [2024-07-23 10:54:39.495432] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.045 [2024-07-23 10:54:39.504784] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.045 [2024-07-23 10:54:39.505306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.045 [2024-07-23 10:54:39.505360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.045 [2024-07-23 10:54:39.505379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.045 [2024-07-23 10:54:39.505661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.045 [2024-07-23 10:54:39.505936] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.045 [2024-07-23 10:54:39.505959] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.045 [2024-07-23 10:54:39.505974] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.045 [2024-07-23 10:54:39.510020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.045 [2024-07-23 10:54:39.519127] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.045 [2024-07-23 10:54:39.519752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.045 [2024-07-23 10:54:39.519794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.045 [2024-07-23 10:54:39.519813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.045 [2024-07-23 10:54:39.520084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.045 [2024-07-23 10:54:39.520353] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.045 [2024-07-23 10:54:39.520376] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.045 [2024-07-23 10:54:39.520391] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.045 [2024-07-23 10:54:39.524454] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.045 [2024-07-23 10:54:39.533570] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.045 [2024-07-23 10:54:39.534093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.045 [2024-07-23 10:54:39.534140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.045 [2024-07-23 10:54:39.534158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.045 [2024-07-23 10:54:39.534422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.045 [2024-07-23 10:54:39.534698] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.045 [2024-07-23 10:54:39.534721] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.045 [2024-07-23 10:54:39.534736] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.045 [2024-07-23 10:54:39.538814] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.304 [2024-07-23 10:54:39.548039] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.304 [2024-07-23 10:54:39.548525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.304 [2024-07-23 10:54:39.548556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.304 [2024-07-23 10:54:39.548574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.304 [2024-07-23 10:54:39.548838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.304 [2024-07-23 10:54:39.549117] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.304 [2024-07-23 10:54:39.549141] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.304 [2024-07-23 10:54:39.549156] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.304 [2024-07-23 10:54:39.553240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.304 [2024-07-23 10:54:39.562403] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.304 [2024-07-23 10:54:39.562844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.304 [2024-07-23 10:54:39.562886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.304 [2024-07-23 10:54:39.562904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.304 [2024-07-23 10:54:39.563175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.304 [2024-07-23 10:54:39.563449] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.304 [2024-07-23 10:54:39.563472] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.304 [2024-07-23 10:54:39.563500] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.304 [2024-07-23 10:54:39.567585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.304 [2024-07-23 10:54:39.576799] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.304 [2024-07-23 10:54:39.577281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.304 [2024-07-23 10:54:39.577321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.304 [2024-07-23 10:54:39.577340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.304 [2024-07-23 10:54:39.577624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.304 [2024-07-23 10:54:39.577893] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.304 [2024-07-23 10:54:39.577916] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.304 [2024-07-23 10:54:39.577932] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.304 [2024-07-23 10:54:39.582011] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.304 [2024-07-23 10:54:39.591183] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.304 [2024-07-23 10:54:39.591589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.304 [2024-07-23 10:54:39.591620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.304 [2024-07-23 10:54:39.591644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.304 [2024-07-23 10:54:39.591909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.305 [2024-07-23 10:54:39.592177] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.305 [2024-07-23 10:54:39.592199] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.305 [2024-07-23 10:54:39.592215] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.305 [2024-07-23 10:54:39.596300] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.305 [2024-07-23 10:54:39.605744] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.305 [2024-07-23 10:54:39.606203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.305 [2024-07-23 10:54:39.606254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.305 [2024-07-23 10:54:39.606271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.305 [2024-07-23 10:54:39.606544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.305 [2024-07-23 10:54:39.606812] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.305 [2024-07-23 10:54:39.606835] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.305 [2024-07-23 10:54:39.606849] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.305 [2024-07-23 10:54:39.610947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.305 [2024-07-23 10:54:39.620148] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.305 [2024-07-23 10:54:39.620687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.305 [2024-07-23 10:54:39.620741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.305 [2024-07-23 10:54:39.620760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.305 [2024-07-23 10:54:39.621030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.305 [2024-07-23 10:54:39.621299] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.305 [2024-07-23 10:54:39.621322] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.305 [2024-07-23 10:54:39.621337] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.305 [2024-07-23 10:54:39.625413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.305 [2024-07-23 10:54:39.634649] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.305 [2024-07-23 10:54:39.635171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.305 [2024-07-23 10:54:39.635226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.305 [2024-07-23 10:54:39.635245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.305 [2024-07-23 10:54:39.635529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.305 [2024-07-23 10:54:39.635799] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.305 [2024-07-23 10:54:39.635835] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.305 [2024-07-23 10:54:39.635852] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.305 [2024-07-23 10:54:39.639944] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.305 [2024-07-23 10:54:39.649165] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.305 [2024-07-23 10:54:39.649689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.305 [2024-07-23 10:54:39.649730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.305 [2024-07-23 10:54:39.649749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.305 [2024-07-23 10:54:39.650020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.305 [2024-07-23 10:54:39.650293] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.305 [2024-07-23 10:54:39.650315] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.305 [2024-07-23 10:54:39.650331] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.305 [2024-07-23 10:54:39.654443] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.305 [2024-07-23 10:54:39.663641] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.305 [2024-07-23 10:54:39.664223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.305 [2024-07-23 10:54:39.664264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.305 [2024-07-23 10:54:39.664284] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.305 [2024-07-23 10:54:39.664568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.305 [2024-07-23 10:54:39.664837] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.305 [2024-07-23 10:54:39.664859] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.305 [2024-07-23 10:54:39.664875] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.305 [2024-07-23 10:54:39.668959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.305 [2024-07-23 10:54:39.678140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.305 [2024-07-23 10:54:39.678675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.305 [2024-07-23 10:54:39.678716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.305 [2024-07-23 10:54:39.678735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.305 [2024-07-23 10:54:39.679005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.305 [2024-07-23 10:54:39.679281] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.305 [2024-07-23 10:54:39.679303] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.305 [2024-07-23 10:54:39.679319] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.305 [2024-07-23 10:54:39.683411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.305 [2024-07-23 10:54:39.692610] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.305 [2024-07-23 10:54:39.693131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.305 [2024-07-23 10:54:39.693180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.305 [2024-07-23 10:54:39.693197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.305 [2024-07-23 10:54:39.693461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.305 [2024-07-23 10:54:39.693739] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.305 [2024-07-23 10:54:39.693762] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.305 [2024-07-23 10:54:39.693778] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.305 [2024-07-23 10:54:39.697893] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.305 [2024-07-23 10:54:39.707118] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.305 [2024-07-23 10:54:39.707584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.305 [2024-07-23 10:54:39.707665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.305 [2024-07-23 10:54:39.707683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.305 [2024-07-23 10:54:39.707946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.306 [2024-07-23 10:54:39.708214] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.306 [2024-07-23 10:54:39.708237] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.306 [2024-07-23 10:54:39.708252] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.306 [2024-07-23 10:54:39.712350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.306 [2024-07-23 10:54:39.721671] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.306 [2024-07-23 10:54:39.722159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.306 [2024-07-23 10:54:39.722189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.306 [2024-07-23 10:54:39.722206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.306 [2024-07-23 10:54:39.722469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.306 [2024-07-23 10:54:39.722749] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.306 [2024-07-23 10:54:39.722772] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.306 [2024-07-23 10:54:39.722787] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.306 [2024-07-23 10:54:39.726840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.306 [2024-07-23 10:54:39.736172] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.306 [2024-07-23 10:54:39.736711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.306 [2024-07-23 10:54:39.736752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.306 [2024-07-23 10:54:39.736779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.306 [2024-07-23 10:54:39.737051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.306 [2024-07-23 10:54:39.737321] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.306 [2024-07-23 10:54:39.737344] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.306 [2024-07-23 10:54:39.737359] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.306 [2024-07-23 10:54:39.741438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.306 [2024-07-23 10:54:39.750583] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.306 [2024-07-23 10:54:39.751105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.306 [2024-07-23 10:54:39.751199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.306 [2024-07-23 10:54:39.751218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.306 [2024-07-23 10:54:39.751504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.306 [2024-07-23 10:54:39.751780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.306 [2024-07-23 10:54:39.751802] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.306 [2024-07-23 10:54:39.751817] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.306 [2024-07-23 10:54:39.755913] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.306 [2024-07-23 10:54:39.765004] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.306 [2024-07-23 10:54:39.765515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.306 [2024-07-23 10:54:39.765557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.306 [2024-07-23 10:54:39.765576] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.306 [2024-07-23 10:54:39.765846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.306 [2024-07-23 10:54:39.766121] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.306 [2024-07-23 10:54:39.766144] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.306 [2024-07-23 10:54:39.766159] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.306 [2024-07-23 10:54:39.770217] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.306 [2024-07-23 10:54:39.779562] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.306 [2024-07-23 10:54:39.780112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.306 [2024-07-23 10:54:39.780154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.306 [2024-07-23 10:54:39.780173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.306 [2024-07-23 10:54:39.780444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.306 [2024-07-23 10:54:39.780724] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.306 [2024-07-23 10:54:39.780756] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.306 [2024-07-23 10:54:39.780773] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.306 [2024-07-23 10:54:39.784838] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.306 [2024-07-23 10:54:39.793966] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.306 [2024-07-23 10:54:39.794495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.306 [2024-07-23 10:54:39.794536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.306 [2024-07-23 10:54:39.794556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.306 [2024-07-23 10:54:39.794827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.306 [2024-07-23 10:54:39.795095] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.306 [2024-07-23 10:54:39.795118] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.306 [2024-07-23 10:54:39.795133] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.306 [2024-07-23 10:54:39.799183] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.566 [2024-07-23 10:54:39.808422] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.566 [2024-07-23 10:54:39.808906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.566 [2024-07-23 10:54:39.808956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.566 [2024-07-23 10:54:39.808974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.566 [2024-07-23 10:54:39.809239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.566 [2024-07-23 10:54:39.809520] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.566 [2024-07-23 10:54:39.809544] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.566 [2024-07-23 10:54:39.809559] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.566 [2024-07-23 10:54:39.813668] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.566 [2024-07-23 10:54:39.822818] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.566 [2024-07-23 10:54:39.823312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.566 [2024-07-23 10:54:39.823363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.566 [2024-07-23 10:54:39.823380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.566 [2024-07-23 10:54:39.823654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.566 [2024-07-23 10:54:39.823923] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.566 [2024-07-23 10:54:39.823946] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.566 [2024-07-23 10:54:39.823961] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.566 [2024-07-23 10:54:39.828038] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.566 [2024-07-23 10:54:39.837422] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.566 [2024-07-23 10:54:39.837942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.566 [2024-07-23 10:54:39.837992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.566 [2024-07-23 10:54:39.838009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.566 [2024-07-23 10:54:39.838272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.566 [2024-07-23 10:54:39.838557] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.566 [2024-07-23 10:54:39.838580] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.566 [2024-07-23 10:54:39.838596] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.566 [2024-07-23 10:54:39.842709] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.566 [2024-07-23 10:54:39.851919] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.566 [2024-07-23 10:54:39.852360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.566 [2024-07-23 10:54:39.852410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.566 [2024-07-23 10:54:39.852427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.566 [2024-07-23 10:54:39.852700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.566 [2024-07-23 10:54:39.852968] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.566 [2024-07-23 10:54:39.852990] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.566 [2024-07-23 10:54:39.853006] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.566 [2024-07-23 10:54:39.857115] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.566 [2024-07-23 10:54:39.866413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.566 [2024-07-23 10:54:39.866938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.566 [2024-07-23 10:54:39.866986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.566 [2024-07-23 10:54:39.867003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.566 [2024-07-23 10:54:39.867266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.566 [2024-07-23 10:54:39.867552] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.566 [2024-07-23 10:54:39.867575] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.566 [2024-07-23 10:54:39.867590] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.566 [2024-07-23 10:54:39.871727] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.566 [2024-07-23 10:54:39.880934] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.566 [2024-07-23 10:54:39.881374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.566 [2024-07-23 10:54:39.881403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.566 [2024-07-23 10:54:39.881420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.566 [2024-07-23 10:54:39.881700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.566 [2024-07-23 10:54:39.881968] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.566 [2024-07-23 10:54:39.881990] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.566 [2024-07-23 10:54:39.882005] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.566 [2024-07-23 10:54:39.886129] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.566 [2024-07-23 10:54:39.895304] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.566 [2024-07-23 10:54:39.895817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.567 [2024-07-23 10:54:39.895860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.567 [2024-07-23 10:54:39.895879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.567 [2024-07-23 10:54:39.896149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.567 [2024-07-23 10:54:39.896424] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.567 [2024-07-23 10:54:39.896446] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.567 [2024-07-23 10:54:39.896463] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.567 [2024-07-23 10:54:39.900575] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.567 [2024-07-23 10:54:39.909737] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.567 [2024-07-23 10:54:39.910290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.567 [2024-07-23 10:54:39.910332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.567 [2024-07-23 10:54:39.910351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.567 [2024-07-23 10:54:39.910640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.567 [2024-07-23 10:54:39.910916] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.567 [2024-07-23 10:54:39.910938] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.567 [2024-07-23 10:54:39.910953] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.567 [2024-07-23 10:54:39.915045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.567 [2024-07-23 10:54:39.924235] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.567 [2024-07-23 10:54:39.924767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.567 [2024-07-23 10:54:39.924823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.567 [2024-07-23 10:54:39.924842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.567 [2024-07-23 10:54:39.925112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.567 [2024-07-23 10:54:39.925381] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.567 [2024-07-23 10:54:39.925404] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.567 [2024-07-23 10:54:39.925437] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.567 [2024-07-23 10:54:39.929572] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.567 [2024-07-23 10:54:39.938763] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.567 [2024-07-23 10:54:39.939328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.567 [2024-07-23 10:54:39.939369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.567 [2024-07-23 10:54:39.939388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.567 [2024-07-23 10:54:39.939677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.567 [2024-07-23 10:54:39.939953] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.567 [2024-07-23 10:54:39.939976] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.567 [2024-07-23 10:54:39.939991] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.567 [2024-07-23 10:54:39.944068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.567 [2024-07-23 10:54:39.953268] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.567 [2024-07-23 10:54:39.953808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.567 [2024-07-23 10:54:39.953863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.567 [2024-07-23 10:54:39.953880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.567 [2024-07-23 10:54:39.954144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.567 [2024-07-23 10:54:39.954418] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.567 [2024-07-23 10:54:39.954441] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.567 [2024-07-23 10:54:39.954456] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.567 [2024-07-23 10:54:39.958560] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.567 [2024-07-23 10:54:39.967798] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.567 [2024-07-23 10:54:39.968223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.567 [2024-07-23 10:54:39.968274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.567 [2024-07-23 10:54:39.968290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.567 [2024-07-23 10:54:39.968567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.567 [2024-07-23 10:54:39.968835] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.567 [2024-07-23 10:54:39.968858] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.567 [2024-07-23 10:54:39.968874] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.567 [2024-07-23 10:54:39.972920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.567 [2024-07-23 10:54:39.982313] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.567 [2024-07-23 10:54:39.982781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.567 [2024-07-23 10:54:39.982839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.567 [2024-07-23 10:54:39.982857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.567 [2024-07-23 10:54:39.983121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.567 [2024-07-23 10:54:39.983388] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.567 [2024-07-23 10:54:39.983410] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.567 [2024-07-23 10:54:39.983426] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.567 [2024-07-23 10:54:39.987539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.567 [2024-07-23 10:54:39.996876] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.567 [2024-07-23 10:54:39.997340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.567 [2024-07-23 10:54:39.997389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.567 [2024-07-23 10:54:39.997406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.567 [2024-07-23 10:54:39.997682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.567 [2024-07-23 10:54:39.997951] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.567 [2024-07-23 10:54:39.997973] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.567 [2024-07-23 10:54:39.997988] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.567 [2024-07-23 10:54:40.002271] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.567 [2024-07-23 10:54:40.011877] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.567 [2024-07-23 10:54:40.013384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.567 [2024-07-23 10:54:40.013434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.567 [2024-07-23 10:54:40.013455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.567 [2024-07-23 10:54:40.013757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.567 [2024-07-23 10:54:40.014047] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.567 [2024-07-23 10:54:40.014072] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.567 [2024-07-23 10:54:40.014090] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.567 [2024-07-23 10:54:40.018165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.567 [2024-07-23 10:54:40.026314] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.567 [2024-07-23 10:54:40.026917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.567 [2024-07-23 10:54:40.026983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.567 [2024-07-23 10:54:40.027002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.567 [2024-07-23 10:54:40.027274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.567 [2024-07-23 10:54:40.027571] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.567 [2024-07-23 10:54:40.027595] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.567 [2024-07-23 10:54:40.027613] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.567 [2024-07-23 10:54:40.031684] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.567 [2024-07-23 10:54:40.040793] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.567 [2024-07-23 10:54:40.041309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.568 [2024-07-23 10:54:40.041343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.568 [2024-07-23 10:54:40.041362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.568 [2024-07-23 10:54:40.041638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.568 [2024-07-23 10:54:40.041908] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.568 [2024-07-23 10:54:40.041930] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.568 [2024-07-23 10:54:40.041946] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.568 [2024-07-23 10:54:40.046014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.568 [2024-07-23 10:54:40.055348] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.568 [2024-07-23 10:54:40.055922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:51.568 [2024-07-23 10:54:40.055967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420
00:33:51.568 [2024-07-23 10:54:40.055987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set
00:33:51.568 [2024-07-23 10:54:40.056259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor
00:33:51.568 [2024-07-23 10:54:40.056543] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:51.568 [2024-07-23 10:54:40.056566] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:51.568 [2024-07-23 10:54:40.056582] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.568 [2024-07-23 10:54:40.060665] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:51.827 [2024-07-23 10:54:40.069878] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.827 [2024-07-23 10:54:40.070395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.827 [2024-07-23 10:54:40.070445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.827 [2024-07-23 10:54:40.070462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.827 [2024-07-23 10:54:40.070736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.827 [2024-07-23 10:54:40.071011] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.827 [2024-07-23 10:54:40.071034] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.827 [2024-07-23 10:54:40.071049] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.827 [2024-07-23 10:54:40.075153] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.827 [2024-07-23 10:54:40.084376] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.827 [2024-07-23 10:54:40.084919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.827 [2024-07-23 10:54:40.084960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.827 [2024-07-23 10:54:40.084979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.827 [2024-07-23 10:54:40.085250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.827 [2024-07-23 10:54:40.085534] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.827 [2024-07-23 10:54:40.085560] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.827 [2024-07-23 10:54:40.085576] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.827 [2024-07-23 10:54:40.089692] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.827 [2024-07-23 10:54:40.098870] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.827 [2024-07-23 10:54:40.099341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.827 [2024-07-23 10:54:40.099393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.827 [2024-07-23 10:54:40.099410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.827 [2024-07-23 10:54:40.099685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.827 [2024-07-23 10:54:40.099953] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.827 [2024-07-23 10:54:40.099976] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.827 [2024-07-23 10:54:40.099992] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.827 [2024-07-23 10:54:40.104057] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.827 [2024-07-23 10:54:40.113518] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.827 [2024-07-23 10:54:40.114006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.827 [2024-07-23 10:54:40.114056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.827 [2024-07-23 10:54:40.114074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.827 [2024-07-23 10:54:40.114337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.827 [2024-07-23 10:54:40.114617] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.827 [2024-07-23 10:54:40.114640] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.827 [2024-07-23 10:54:40.114655] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.827 [2024-07-23 10:54:40.118720] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.827 [2024-07-23 10:54:40.127954] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.827 [2024-07-23 10:54:40.128449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.827 [2024-07-23 10:54:40.128507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.827 [2024-07-23 10:54:40.128531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.828 [2024-07-23 10:54:40.128796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.828 [2024-07-23 10:54:40.129064] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.828 [2024-07-23 10:54:40.129086] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.828 [2024-07-23 10:54:40.129101] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.828 [2024-07-23 10:54:40.133178] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.828 [2024-07-23 10:54:40.142351] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.828 [2024-07-23 10:54:40.142904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.828 [2024-07-23 10:54:40.142946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.828 [2024-07-23 10:54:40.142965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.828 [2024-07-23 10:54:40.143235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.828 [2024-07-23 10:54:40.143522] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.828 [2024-07-23 10:54:40.143546] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.828 [2024-07-23 10:54:40.143561] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.828 [2024-07-23 10:54:40.147689] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.828 [2024-07-23 10:54:40.156909] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.828 [2024-07-23 10:54:40.157408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.828 [2024-07-23 10:54:40.157457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.828 [2024-07-23 10:54:40.157474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.828 [2024-07-23 10:54:40.157750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.828 [2024-07-23 10:54:40.158018] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.828 [2024-07-23 10:54:40.158040] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.828 [2024-07-23 10:54:40.158056] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.828 [2024-07-23 10:54:40.162139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.828 [2024-07-23 10:54:40.171358] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.828 [2024-07-23 10:54:40.171856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.828 [2024-07-23 10:54:40.171897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.828 [2024-07-23 10:54:40.171917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.828 [2024-07-23 10:54:40.172187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.828 [2024-07-23 10:54:40.172461] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.828 [2024-07-23 10:54:40.172508] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.828 [2024-07-23 10:54:40.172525] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3948057 Killed "${NVMF_APP[@]}" "$@" 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.828 [2024-07-23 10:54:40.176597] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3948809 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3948809 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3948809 ']' 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:51.828 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.828 [2024-07-23 10:54:40.185940] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.828 [2024-07-23 10:54:40.186324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.828 [2024-07-23 10:54:40.186357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.828 [2024-07-23 10:54:40.186375] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.828 [2024-07-23 10:54:40.186658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.828 [2024-07-23 10:54:40.186929] 
nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.828 [2024-07-23 10:54:40.186951] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.828 [2024-07-23 10:54:40.186967] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.828 [2024-07-23 10:54:40.191016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:51.828 [2024-07-23 10:54:40.200376] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.828 [2024-07-23 10:54:40.200792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.828 [2024-07-23 10:54:40.200823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.828 [2024-07-23 10:54:40.200841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.828 [2024-07-23 10:54:40.201105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.828 [2024-07-23 10:54:40.201374] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.828 [2024-07-23 10:54:40.201403] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.828 [2024-07-23 10:54:40.201419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.828 [2024-07-23 10:54:40.205473] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.828 [2024-07-23 10:54:40.214823] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.828 [2024-07-23 10:54:40.215287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.828 [2024-07-23 10:54:40.215320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.828 [2024-07-23 10:54:40.215338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.828 [2024-07-23 10:54:40.215614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.828 [2024-07-23 10:54:40.215883] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.828 [2024-07-23 10:54:40.215906] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.828 [2024-07-23 10:54:40.215922] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.828 [2024-07-23 10:54:40.219983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:51.828 [2024-07-23 10:54:40.229349] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.828 [2024-07-23 10:54:40.229635] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:51.828 [2024-07-23 10:54:40.229703] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:51.828 [2024-07-23 10:54:40.229801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.828 [2024-07-23 10:54:40.229844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.828 [2024-07-23 10:54:40.229864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.828 [2024-07-23 10:54:40.230138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.828 [2024-07-23 10:54:40.230407] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.828 [2024-07-23 10:54:40.230430] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.828 [2024-07-23 10:54:40.230447] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.828 [2024-07-23 10:54:40.234512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.829 [2024-07-23 10:54:40.243880] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.829 [2024-07-23 10:54:40.244277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.829 [2024-07-23 10:54:40.244308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.829 [2024-07-23 10:54:40.244326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.829 [2024-07-23 10:54:40.244612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.829 [2024-07-23 10:54:40.244881] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.829 [2024-07-23 10:54:40.244903] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.829 [2024-07-23 10:54:40.244927] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.829 [2024-07-23 10:54:40.248996] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.829 [2024-07-23 10:54:40.258255] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.829 [2024-07-23 10:54:40.258703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.829 [2024-07-23 10:54:40.258734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.829 [2024-07-23 10:54:40.258752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.829 [2024-07-23 10:54:40.259016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.829 [2024-07-23 10:54:40.259285] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.829 [2024-07-23 10:54:40.259307] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.829 [2024-07-23 10:54:40.259323] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.829 [2024-07-23 10:54:40.263374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.829 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.829 [2024-07-23 10:54:40.272724] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.829 [2024-07-23 10:54:40.273138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.829 [2024-07-23 10:54:40.273169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.829 [2024-07-23 10:54:40.273187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.829 [2024-07-23 10:54:40.273451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.829 [2024-07-23 10:54:40.273728] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.829 [2024-07-23 10:54:40.273752] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.829 [2024-07-23 10:54:40.273767] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.829 [2024-07-23 10:54:40.277816] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.829 [2024-07-23 10:54:40.287129] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.829 [2024-07-23 10:54:40.287518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.829 [2024-07-23 10:54:40.287549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.829 [2024-07-23 10:54:40.287566] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.829 [2024-07-23 10:54:40.287830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.829 [2024-07-23 10:54:40.288107] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.829 [2024-07-23 10:54:40.288129] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.829 [2024-07-23 10:54:40.288145] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.829 [2024-07-23 10:54:40.292198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.829 [2024-07-23 10:54:40.296895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:51.829 [2024-07-23 10:54:40.301604] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.829 [2024-07-23 10:54:40.302170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.829 [2024-07-23 10:54:40.302211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.829 [2024-07-23 10:54:40.302230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.829 [2024-07-23 10:54:40.302517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.829 [2024-07-23 10:54:40.302792] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.829 [2024-07-23 10:54:40.302815] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.829 [2024-07-23 10:54:40.302832] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.829 [2024-07-23 10:54:40.306957] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.829 [2024-07-23 10:54:40.316134] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.829 [2024-07-23 10:54:40.316728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.829 [2024-07-23 10:54:40.316786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:51.829 [2024-07-23 10:54:40.316808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:51.829 [2024-07-23 10:54:40.317090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:51.829 [2024-07-23 10:54:40.317363] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.829 [2024-07-23 10:54:40.317386] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.829 [2024-07-23 10:54:40.317404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.829 [2024-07-23 10:54:40.321463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.088 [2024-07-23 10:54:40.330704] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.088 [2024-07-23 10:54:40.331263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.088 [2024-07-23 10:54:40.331317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.088 [2024-07-23 10:54:40.331339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.088 [2024-07-23 10:54:40.331628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.088 [2024-07-23 10:54:40.331902] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.088 [2024-07-23 10:54:40.331926] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.088 [2024-07-23 10:54:40.331943] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.088 [2024-07-23 10:54:40.336056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.088 [2024-07-23 10:54:40.345186] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.088 [2024-07-23 10:54:40.345804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.088 [2024-07-23 10:54:40.345850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.088 [2024-07-23 10:54:40.345871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.088 [2024-07-23 10:54:40.346164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.088 [2024-07-23 10:54:40.346446] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.088 [2024-07-23 10:54:40.346469] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.088 [2024-07-23 10:54:40.346497] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.088 [2024-07-23 10:54:40.350616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.088 [2024-07-23 10:54:40.359768] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.088 [2024-07-23 10:54:40.360291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.088 [2024-07-23 10:54:40.360334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.088 [2024-07-23 10:54:40.360354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.088 [2024-07-23 10:54:40.360635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.088 [2024-07-23 10:54:40.360908] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.088 [2024-07-23 10:54:40.360931] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.088 [2024-07-23 10:54:40.360949] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.088 [2024-07-23 10:54:40.365005] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.088 [2024-07-23 10:54:40.374346] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.088 [2024-07-23 10:54:40.374880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.088 [2024-07-23 10:54:40.374921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.088 [2024-07-23 10:54:40.374941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.088 [2024-07-23 10:54:40.375212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.088 [2024-07-23 10:54:40.375493] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.088 [2024-07-23 10:54:40.375517] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.088 [2024-07-23 10:54:40.375534] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.088 [2024-07-23 10:54:40.379587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.088 [2024-07-23 10:54:40.384387] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.089 [2024-07-23 10:54:40.384424] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.089 [2024-07-23 10:54:40.384448] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.089 [2024-07-23 10:54:40.384470] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:52.089 [2024-07-23 10:54:40.384497] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:52.089 [2024-07-23 10:54:40.384608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:52.089 [2024-07-23 10:54:40.384664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:52.089 [2024-07-23 10:54:40.384673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.089 [2024-07-23 10:54:40.388919] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.089 [2024-07-23 10:54:40.389497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.089 [2024-07-23 10:54:40.389540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.089 [2024-07-23 10:54:40.389561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.089 [2024-07-23 10:54:40.389837] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.089 [2024-07-23 10:54:40.390111] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.089 [2024-07-23 10:54:40.390134] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.089 [2024-07-23 10:54:40.390152] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.089 [2024-07-23 10:54:40.394260] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.089 [2024-07-23 10:54:40.403523] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.089 [2024-07-23 10:54:40.404102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.089 [2024-07-23 10:54:40.404146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.089 [2024-07-23 10:54:40.404166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.089 [2024-07-23 10:54:40.404441] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.089 [2024-07-23 10:54:40.404725] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.089 [2024-07-23 10:54:40.404749] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.089 [2024-07-23 10:54:40.404766] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.089 [2024-07-23 10:54:40.408890] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.089 [2024-07-23 10:54:40.418143] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.089 [2024-07-23 10:54:40.418725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.089 [2024-07-23 10:54:40.418769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.089 [2024-07-23 10:54:40.418789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.089 [2024-07-23 10:54:40.419066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.089 [2024-07-23 10:54:40.419338] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.089 [2024-07-23 10:54:40.419361] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.089 [2024-07-23 10:54:40.419380] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.089 [2024-07-23 10:54:40.423495] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.089 [2024-07-23 10:54:40.432695] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.089 [2024-07-23 10:54:40.433249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.089 [2024-07-23 10:54:40.433290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.089 [2024-07-23 10:54:40.433311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.089 [2024-07-23 10:54:40.433603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.089 [2024-07-23 10:54:40.433876] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.089 [2024-07-23 10:54:40.433899] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.089 [2024-07-23 10:54:40.433917] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.089 [2024-07-23 10:54:40.438031] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.089 [2024-07-23 10:54:40.447252] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.089 [2024-07-23 10:54:40.447842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.089 [2024-07-23 10:54:40.447886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.089 [2024-07-23 10:54:40.447907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.089 [2024-07-23 10:54:40.448182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.089 [2024-07-23 10:54:40.448457] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.089 [2024-07-23 10:54:40.448490] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.089 [2024-07-23 10:54:40.448510] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.089 [2024-07-23 10:54:40.452592] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.089 [2024-07-23 10:54:40.461715] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.089 [2024-07-23 10:54:40.462210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.089 [2024-07-23 10:54:40.462252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.089 [2024-07-23 10:54:40.462272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.089 [2024-07-23 10:54:40.462564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.089 [2024-07-23 10:54:40.462836] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.089 [2024-07-23 10:54:40.462859] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.089 [2024-07-23 10:54:40.462876] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.089 [2024-07-23 10:54:40.466943] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.089 [2024-07-23 10:54:40.476287] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.089 [2024-07-23 10:54:40.476693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.089 [2024-07-23 10:54:40.476723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.089 [2024-07-23 10:54:40.476741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.089 [2024-07-23 10:54:40.477004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.089 [2024-07-23 10:54:40.477272] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.089 [2024-07-23 10:54:40.477295] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.089 [2024-07-23 10:54:40.477321] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.089 [2024-07-23 10:54:40.481383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.089 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:52.089 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:52.089 10:54:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:52.089 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:52.089 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.089 [2024-07-23 10:54:40.490727] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.089 [2024-07-23 10:54:40.491098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.089 [2024-07-23 10:54:40.491127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.089 [2024-07-23 10:54:40.491145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.089 [2024-07-23 10:54:40.491410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.089 [2024-07-23 10:54:40.491687] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.089 [2024-07-23 10:54:40.491710] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.089 [2024-07-23 10:54:40.491726] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.089 [2024-07-23 10:54:40.495792] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.089 10:54:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:52.089 10:54:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:52.089 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.089 [2024-07-23 10:54:40.505151] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.089 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.089 [2024-07-23 10:54:40.505570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.090 [2024-07-23 10:54:40.505614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.090 [2024-07-23 10:54:40.505633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.090 [2024-07-23 10:54:40.505906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.090 [2024-07-23 10:54:40.506181] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.090 [2024-07-23 10:54:40.506204] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.090 [2024-07-23 10:54:40.506220] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.090 [2024-07-23 10:54:40.508672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:52.090 [2024-07-23 10:54:40.510270] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.090 [2024-07-23 10:54:40.519611] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.090 [2024-07-23 10:54:40.520094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.090 [2024-07-23 10:54:40.520136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.090 [2024-07-23 10:54:40.520156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.090 [2024-07-23 10:54:40.520427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.090 [2024-07-23 10:54:40.520707] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.090 [2024-07-23 10:54:40.520731] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.090 [2024-07-23 10:54:40.520747] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.090 [2024-07-23 10:54:40.524820] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.090 [2024-07-23 10:54:40.534183] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.090 [2024-07-23 10:54:40.534782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.090 [2024-07-23 10:54:40.534840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.090 [2024-07-23 10:54:40.534861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.090 [2024-07-23 10:54:40.535145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.090 [2024-07-23 10:54:40.535420] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.090 [2024-07-23 10:54:40.535443] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.090 [2024-07-23 10:54:40.535461] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.090 [2024-07-23 10:54:40.539576] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.090 Malloc0 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.090 [2024-07-23 10:54:40.548845] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.090 [2024-07-23 10:54:40.549372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.090 [2024-07-23 10:54:40.549409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.090 [2024-07-23 10:54:40.549429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.090 [2024-07-23 10:54:40.549711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.090 [2024-07-23 10:54:40.549992] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.090 [2024-07-23 10:54:40.550016] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.090 [2024-07-23 10:54:40.550033] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.090 [2024-07-23 10:54:40.554091] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.090 [2024-07-23 10:54:40.563460] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.090 [2024-07-23 10:54:40.563868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.090 [2024-07-23 10:54:40.563898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1756950 with addr=10.0.0.2, port=4420 00:33:52.090 [2024-07-23 10:54:40.563916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756950 is same with the state(5) to be set 00:33:52.090 [2024-07-23 10:54:40.564180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756950 (9): Bad file descriptor 00:33:52.090 [2024-07-23 10:54:40.564448] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.090 [2024-07-23 10:54:40.564472] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.090 [2024-07-23 10:54:40.564497] 
nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.090 [2024-07-23 10:54:40.566565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.090 [2024-07-23 10:54:40.568561] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.090 10:54:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3948217 00:33:52.090 [2024-07-23 10:54:40.577921] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.348 [2024-07-23 10:54:40.697169] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:02.311 00:34:02.311 Latency(us) 00:34:02.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:02.311 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:02.311 Verification LBA range: start 0x0 length 0x4000 00:34:02.311 Nvme1n1 : 15.02 5855.43 22.87 7484.22 0.00 9565.41 655.36 19709.35 00:34:02.311 =================================================================================================================== 00:34:02.311 Total : 5855.43 22.87 7484.22 0.00 9565.41 655.36 19709.35 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:02.311 10:54:49 
nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:02.311 rmmod nvme_tcp 00:34:02.311 rmmod nvme_fabrics 00:34:02.311 rmmod nvme_keyring 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:02.311 10:54:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3948809 ']' 00:34:02.312 10:54:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3948809 00:34:02.312 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3948809 ']' 00:34:02.312 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3948809 00:34:02.312 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:02.312 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:02.312 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3948809 00:34:02.312 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:02.312 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:02.312 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3948809' 00:34:02.312 killing process with 
pid 3948809 00:34:02.312 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3948809 00:34:02.312 10:54:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 3948809 00:34:02.312 10:54:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:02.312 10:54:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:02.312 10:54:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:02.312 10:54:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:02.312 10:54:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:02.312 10:54:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.312 10:54:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:02.312 10:54:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.691 10:54:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:03.691 00:34:03.691 real 0m21.869s 00:34:03.691 user 0m59.311s 00:34:03.691 sys 0m4.006s 00:34:03.691 10:54:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:03.691 10:54:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.691 ************************************ 00:34:03.691 END TEST nvmf_bdevperf 00:34:03.691 ************************************ 00:34:03.949 10:54:52 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:03.949 10:54:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:03.949 10:54:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:03.949 10:54:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:03.950 
************************************ 00:34:03.950 START TEST nvmf_target_disconnect 00:34:03.950 ************************************ 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:03.950 * Looking for test storage... 00:34:03.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # 
NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:03.950 10:54:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:05.855 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:05.856 
10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:05.856 10:54:53 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:05.856 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:34:05.856 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:34:05.856 Found net devices under 0000:08:00.0: cvl_0_0 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:05.856 Found net devices under 0000:08:00.1: cvl_0_1 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 
-- # net_devs+=("${pci_net_devs[@]}") 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:05.856 10:54:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:05.856 
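The PCI-to-netdev resolution traced above (nvmf/common.sh@383 and @399) is a sysfs glob followed by a prefix strip that leaves only the interface name. A minimal reproduction of just that string handling, with the sysfs path faked so it runs anywhere (no real NIC assumed):

```shell
#!/usr/bin/env bash
# Reproduces only the array/glob string handling from nvmf/common.sh@383/@399.
# The sysfs entry is a hard-coded stand-in for illustration.
set -euo pipefail

# On a real box this would be: pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("/sys/bus/pci/devices/0000:08:00.0/net/cvl_0_0")

# Strip everything up to the last '/' in each element, leaving the interface name
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under 0000:08:00.0: ${pci_net_devs[*]}"
```

This is why the log reports `Found net devices under 0000:08:00.0: cvl_0_0` rather than the full sysfs path.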
10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:05.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:05.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:34:05.856 00:34:05.856 --- 10.0.0.2 ping statistics --- 00:34:05.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.856 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:05.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:05.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:34:05.856 00:34:05.856 --- 10.0.0.1 ping statistics --- 00:34:05.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.856 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:05.856 10:54:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:05.856 ************************************ 00:34:05.856 START TEST nvmf_target_disconnect_tc1 00:34:05.856 ************************************ 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:05.857 
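The namespace plumbing performed above by nvmf_tcp_init (nvmf/common.sh@229 through @264, ending in the two ping checks) can be sketched as the script below. Interface names and addresses are taken from the log; everything is printed as a plan by default (DRY_RUN=1), since the real commands require root and the cvl_* NICs.

```shell
#!/usr/bin/env bash
# Hedged sketch of nvmf_tcp_init's namespace setup, as observed in the log.
# DRY_RUN=1 (the default) echoes each command instead of executing it.
set -euo pipefail

run() {
  if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "$*"; else "$@"; fi
}

setup_netns() {
  local ns=cvl_0_0_ns_spdk target_if=cvl_0_0 initiator_if=cvl_0_1
  run ip -4 addr flush "$target_if"
  run ip -4 addr flush "$initiator_if"
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"          # target port lives in the namespace
  run ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side, host namespace
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

setup_netns
```

The ping pair in the log (host to 10.0.0.2, namespace to 10.0.0.1) verifies this plumbing in both directions before any NVMe/TCP traffic is attempted.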
10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:05.857 10:54:54 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:05.857 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.857 [2024-07-23 10:54:54.220341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.857 [2024-07-23 10:54:54.220434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21565f0 with addr=10.0.0.2, port=4420 00:34:05.857 [2024-07-23 10:54:54.220469] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:05.857 [2024-07-23 10:54:54.220501] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:05.857 [2024-07-23 10:54:54.220523] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:05.857 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:05.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:05.857 Initializing NVMe Controllers 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:05.857 00:34:05.857 real 0m0.090s 00:34:05.857 user 0m0.043s 00:34:05.857 sys 0m0.047s 
00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:05.857 ************************************ 00:34:05.857 END TEST nvmf_target_disconnect_tc1 00:34:05.857 ************************************ 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:05.857 ************************************ 00:34:05.857 START TEST nvmf_target_disconnect_tc2 00:34:05.857 ************************************ 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3951233 00:34:05.857 10:54:54 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3951233 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3951233 ']' 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:05.857 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:05.857 [2024-07-23 10:54:54.338609] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:34:05.857 [2024-07-23 10:54:54.338703] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:06.115 EAL: No free 2048 kB hugepages reported on node 1 00:34:06.115 [2024-07-23 10:54:54.403575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:06.115 [2024-07-23 10:54:54.492536] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:06.115 [2024-07-23 10:54:54.492601] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:06.115 [2024-07-23 10:54:54.492617] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:06.115 [2024-07-23 10:54:54.492631] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:06.115 [2024-07-23 10:54:54.492643] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:06.115 [2024-07-23 10:54:54.492769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:06.115 [2024-07-23 10:54:54.492849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:06.115 [2024-07-23 10:54:54.493000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:06.115 [2024-07-23 10:54:54.493008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:06.115 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:06.115 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:06.115 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:06.115 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:06.115 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:06.373 Malloc0 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:06.373 [2024-07-23 10:54:54.645489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:06.373 [2024-07-23 10:54:54.673741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3951266 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:06.373 10:54:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:06.373 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.283 10:54:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3951233 00:34:08.283 10:54:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:08.283 Read completed with error (sct=0, sc=8) 00:34:08.283 starting I/O failed 00:34:08.283 Read completed with error 
00:34:08.283 Read/Write completed with error (sct=0, sc=8) - starting I/O failed (record repeated for every outstanding I/O on qpair ids 4, 3, 2, 1) 00:34:08.284 [2024-07-23 10:54:56.699021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:08.284 [2024-07-23 10:54:56.699402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:08.284 [2024-07-23 10:54:56.699765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.284 [2024-07-23 10:54:56.700145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.284 [2024-07-23 10:54:56.700407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.284 [2024-07-23 10:54:56.700466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.284 qpair failed and we were unable to recover it.
00:34:08.284 [2024-07-23 10:54:56.700712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.284 [2024-07-23 10:54:56.700771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.284 qpair failed and we were unable to recover it. (connect() failed / sock connection error / qpair failed sequence repeated through 10:54:56.711388 for tqpair=0x7fb6e0000b90) 00:34:08.286 [2024-07-23 10:54:56.711520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.286 [2024-07-23 10:54:56.711563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.286 qpair failed and we were unable to recover it. (sequence repeated through 10:54:56.715408 for tqpair=0x1f80990)
00:34:08.287 [2024-07-23 10:54:56.715514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.715541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.715693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.715736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.715877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.715932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.716063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.716101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.716280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.716332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 
00:34:08.287 [2024-07-23 10:54:56.716436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.716462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.716603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.716630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.716780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.716806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.716894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.716921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.717032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.717075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 
00:34:08.287 [2024-07-23 10:54:56.717174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.717199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.717315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.717341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.717450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.717476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.717645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.717670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.717859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.717886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 
00:34:08.287 [2024-07-23 10:54:56.717994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.718020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.718101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.718127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.718205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.718230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.718343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.718369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.718510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.718565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 
00:34:08.287 [2024-07-23 10:54:56.718694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.718720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.718817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.718842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.719069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.719111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.719207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.719235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.719363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.719410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 
00:34:08.287 [2024-07-23 10:54:56.719493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.719521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.719648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.719692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.719875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.719902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.720034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.720086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.720168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.720194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 
00:34:08.287 [2024-07-23 10:54:56.720292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.287 [2024-07-23 10:54:56.720317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.287 qpair failed and we were unable to recover it. 00:34:08.287 [2024-07-23 10:54:56.720472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.720541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.720724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.720749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.720866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.720905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.721062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.721111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 
00:34:08.288 [2024-07-23 10:54:56.721206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.721234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.721332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.721358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.721442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.721468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.721621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.721665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.721831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.721884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 
00:34:08.288 [2024-07-23 10:54:56.722056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.722105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.722200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.722227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.722355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.722407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.722505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.722539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.722654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.722680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 
00:34:08.288 [2024-07-23 10:54:56.722814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.722860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.722990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.723042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.723134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.723160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.723268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.723295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.723397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.723427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 
00:34:08.288 [2024-07-23 10:54:56.723514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.723540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.723716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.723764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.723846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.723871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.724021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.724071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.724168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.724195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 
00:34:08.288 [2024-07-23 10:54:56.724315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.724341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.724476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.724521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.724648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.724676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.724875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.724901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.725035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.725078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 
00:34:08.288 [2024-07-23 10:54:56.725162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.725188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.288 [2024-07-23 10:54:56.725322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.288 [2024-07-23 10:54:56.725376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.288 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.725475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.725516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.725605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.725630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.725724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.725750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 
00:34:08.289 [2024-07-23 10:54:56.725862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.725915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.726099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.726125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.726247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.726290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.726382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.726408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.726538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.726565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 
00:34:08.289 [2024-07-23 10:54:56.726655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.726681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.726768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.726794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.726941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.726988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.727115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.727176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.727312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.727353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 
00:34:08.289 [2024-07-23 10:54:56.727461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.727493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.727602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.727628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.727746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.727772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.727901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.727943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.728029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.728055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 
00:34:08.289 [2024-07-23 10:54:56.728213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.728283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.728383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.728414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.728517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.728547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.728687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.728744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 00:34:08.289 [2024-07-23 10:54:56.728876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.289 [2024-07-23 10:54:56.728928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.289 qpair failed and we were unable to recover it. 
00:34:08.290 [2024-07-23 10:54:56.732885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.290 [2024-07-23 10:54:56.732925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.290 qpair failed and we were unable to recover it.
00:34:08.292 [2024-07-23 10:54:56.745106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.745136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.745248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.745276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.745362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.745390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.745500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.745528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.745615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.745643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 
00:34:08.292 [2024-07-23 10:54:56.745753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.745779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.745891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.745940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.746019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.746046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.746168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.746215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.746391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.746459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 
00:34:08.292 [2024-07-23 10:54:56.746601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.746645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.746754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.746781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.746887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.746913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.747029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.747055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.747185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.747256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 
00:34:08.292 [2024-07-23 10:54:56.747394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.747438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.747565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.747648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.747752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.747815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.747924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.747951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.292 qpair failed and we were unable to recover it. 00:34:08.292 [2024-07-23 10:54:56.748043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.292 [2024-07-23 10:54:56.748070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 
00:34:08.293 [2024-07-23 10:54:56.748158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.748186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.748273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.748299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.748398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.748424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.748512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.748539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.748622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.748648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 
00:34:08.293 [2024-07-23 10:54:56.748744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.748771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.748866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.748892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.748994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.749048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.749147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.749173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.749302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.749351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 
00:34:08.293 [2024-07-23 10:54:56.749463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.749524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.749665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.749715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.749836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.749884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.749965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.749991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.750073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.750099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 
00:34:08.293 [2024-07-23 10:54:56.750227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.750277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.750375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.750403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.750508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.750536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.750631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.750661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.750754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.750780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 
00:34:08.293 [2024-07-23 10:54:56.750869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.750896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.750998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.751024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.751112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.751140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.751244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.751284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.751376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.751404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 
00:34:08.293 [2024-07-23 10:54:56.751493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.751520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.751621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.751648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.751753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.751798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.293 [2024-07-23 10:54:56.751893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.293 [2024-07-23 10:54:56.751919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.293 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.752035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.752091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 
00:34:08.294 [2024-07-23 10:54:56.752180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.752206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.752339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.752390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.752491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.752522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.752630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.752657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.752784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.752838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 
00:34:08.294 [2024-07-23 10:54:56.752950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.752977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.753072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.753098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.753226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.753307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.753432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.753492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.753598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.753643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 
00:34:08.294 [2024-07-23 10:54:56.753750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.753803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.753887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.753913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.754012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.754038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.754154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.754201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.754311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.754376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 
00:34:08.294 [2024-07-23 10:54:56.754504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.754550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.754632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.754659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.754760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.754786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.754916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.754960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.755084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.755144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 
00:34:08.294 [2024-07-23 10:54:56.755231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.755259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.755389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.755441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.755591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.755635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.755736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.755762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.755865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.755912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 
00:34:08.294 [2024-07-23 10:54:56.756059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.756113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.756235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.756308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.756427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.756454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.756573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.756622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.756742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.756792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 
00:34:08.294 [2024-07-23 10:54:56.756878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.756904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.757003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.757029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.757185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.757249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.757357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.757410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.757533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.757597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 
00:34:08.294 [2024-07-23 10:54:56.757701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.757763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.294 qpair failed and we were unable to recover it. 00:34:08.294 [2024-07-23 10:54:56.757890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.294 [2024-07-23 10:54:56.757916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.758020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.758048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.758183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.758234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.758317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.758344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 
00:34:08.295 [2024-07-23 10:54:56.758435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.758461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.758569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.758597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.758695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.758721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.758853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.758905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.759006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.759031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 
00:34:08.295 [2024-07-23 10:54:56.759151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.759202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.759329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.759382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.759500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.759551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.759683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.759735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.759822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.759849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 
00:34:08.295 [2024-07-23 10:54:56.759966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.760017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.760102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.760128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.760215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.760242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.760365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.760407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.760493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.760521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 
00:34:08.295 [2024-07-23 10:54:56.760620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.760646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.760753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.760781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.760879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.760931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.761019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.761046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.761144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.761185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 
00:34:08.295 [2024-07-23 10:54:56.761272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.761299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.761399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.761428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.761560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.761611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.761739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.761788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.761912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.761958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 
00:34:08.295 [2024-07-23 10:54:56.762078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.762127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.762243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.762291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.762406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.762467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.762644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.762696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.762807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.762855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 
00:34:08.295 [2024-07-23 10:54:56.762943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.762969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.763085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.763131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.763248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.763297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.295 [2024-07-23 10:54:56.763396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-23 10:54:56.763449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.295 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.763575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.763628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 
00:34:08.296 [2024-07-23 10:54:56.763724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.763751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.763900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.763955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.764043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.764071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.764168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.764195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.764289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.764316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 
00:34:08.296 [2024-07-23 10:54:56.764395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.764422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.764506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.764533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.764639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.764665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.764771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.764799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.764900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.764957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 
00:34:08.296 [2024-07-23 10:54:56.765044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.765070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.765170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.765234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.765350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.765399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.765502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.765529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.765637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.765682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 
00:34:08.296 [2024-07-23 10:54:56.765774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.765799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.765898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.765925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.766025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.766051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.766136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.766162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.766270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.766311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 
00:34:08.296 [2024-07-23 10:54:56.766423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.766450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.766562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.766590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.766676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.766702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.766801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.766827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.766934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.766985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 
00:34:08.296 [2024-07-23 10:54:56.767101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.767154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.767261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.767317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.767409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.767435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.767562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.767610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.767700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.767727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 
00:34:08.296 [2024-07-23 10:54:56.767835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.767884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.767966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.767991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.768111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.768174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.768257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.768283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.768383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.768411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 
00:34:08.296 [2024-07-23 10:54:56.768540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.768591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.296 qpair failed and we were unable to recover it. 00:34:08.296 [2024-07-23 10:54:56.768681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.296 [2024-07-23 10:54:56.768706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.768787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.768813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.768905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.768932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.769015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.769041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 
00:34:08.297 [2024-07-23 10:54:56.769146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.769172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.769250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.769275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.769382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.769407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.769506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.769536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.769634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.769661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 
00:34:08.297 [2024-07-23 10:54:56.769758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.769784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.769873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.769900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.769991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.770019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.770102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.770128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.770214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.770240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 
00:34:08.297 [2024-07-23 10:54:56.770332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.770359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.770462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.770507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.770587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.770613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.770715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.770760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 00:34:08.297 [2024-07-23 10:54:56.770857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.297 [2024-07-23 10:54:56.770886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.297 qpair failed and we were unable to recover it. 
00:34:08.297 [2024-07-23 10:54:56.770984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.771012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.771106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.771133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.771220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.771247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.771339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.771366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.771468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.771507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.771611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.771660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.771738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.771764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.771888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.771938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.772065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.772113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.772216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.772273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.772363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.772392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.772495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.772521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.772644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.772693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.772776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.772802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.772886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.772912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.773009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.773035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.773121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.773147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.297 qpair failed and we were unable to recover it.
00:34:08.297 [2024-07-23 10:54:56.773245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.297 [2024-07-23 10:54:56.773270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.773395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.773450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.773547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.773574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.773687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.773730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.773820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.773847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.773959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.774003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.774131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.774175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.774258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.774284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.774379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.774405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.774502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.774534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.774676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.774721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.774831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.774876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.774983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.775031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.775140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.775185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.775280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.775307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.775409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.775435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.775586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.775636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.775739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.775777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.775915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.775958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.776079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.776140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.776251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.776297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.776392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.776423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.776566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.776624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.776711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.776737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.776860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.776905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.777017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.777066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.777168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.777194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.777287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.777313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.777410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.777436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.777563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.777623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.298 [2024-07-23 10:54:56.777750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.298 [2024-07-23 10:54:56.777793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.298 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.777910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.777950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.778069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.778097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.778223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.778267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.778359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.778392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.778502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.778531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.778646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.778694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.778791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.778816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.778932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.778977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.779079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.779105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.779193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.779220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.779326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.779354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.779465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.779504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.779680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.779734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.779846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.779895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.780024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.780071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.780159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.780188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.780289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.780341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.780434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.780461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.780549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.780574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.780689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.780739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.780843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.780870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.780949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.780975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.781073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.781098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.781202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.781228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.781316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.781343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.781436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.781465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.781603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.781649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.781760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.781808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.585 [2024-07-23 10:54:56.781887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.585 [2024-07-23 10:54:56.781913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.585 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.782026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.782071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.782190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.782247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.782363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.782410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.782495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.782523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.782627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.782654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.782754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.782780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.782881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.782907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.782995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.783023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.783118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.783147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.783243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.783270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.783354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.783380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.783463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.783504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.783599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.783625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.783747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.783794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.783897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.783923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.784043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.784092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.784183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.784212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.784292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.586 [2024-07-23 10:54:56.784319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.586 qpair failed and we were unable to recover it.
00:34:08.586 [2024-07-23 10:54:56.784425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.784472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.784565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.784592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.784685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.784711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.784800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.784827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.784920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.784946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 
00:34:08.586 [2024-07-23 10:54:56.785032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.785058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.785158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.785183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.785308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.785357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.785456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.785511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.785605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.785632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 
00:34:08.586 [2024-07-23 10:54:56.785760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.785825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.785906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.785933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.786050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.786096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.786209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.786256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.786369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.786418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 
00:34:08.586 [2024-07-23 10:54:56.786506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.786534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.786679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.786731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.786815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.786841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.786954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.586 [2024-07-23 10:54:56.787000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.586 qpair failed and we were unable to recover it. 00:34:08.586 [2024-07-23 10:54:56.787088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.787116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 
00:34:08.587 [2024-07-23 10:54:56.787234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.787294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.787422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.787469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.787571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.787597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.787711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.787736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.787846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.787872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 
00:34:08.587 [2024-07-23 10:54:56.787969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.788019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.788127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.788175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.788291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.788340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.788465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.788520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.788627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.788675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 
00:34:08.587 [2024-07-23 10:54:56.788775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.788820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.788932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.788979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.789090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.789137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.789253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.789300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.789392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.789419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 
00:34:08.587 [2024-07-23 10:54:56.789497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.789524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.789641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.789686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.789795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.789834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.789930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.789955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.790039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.790066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 
00:34:08.587 [2024-07-23 10:54:56.790156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.790185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.790272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.790298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.790410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.790435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.790554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.790580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.790663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.790688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 
00:34:08.587 [2024-07-23 10:54:56.790767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.790793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.790892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.790939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.791058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.791105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.791194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.791220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.791330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.791378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 
00:34:08.587 [2024-07-23 10:54:56.791489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.791537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.791656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.791699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.791821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.791879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.791966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.791994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.792073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.792099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 
00:34:08.587 [2024-07-23 10:54:56.792194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.587 [2024-07-23 10:54:56.792224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.587 qpair failed and we were unable to recover it. 00:34:08.587 [2024-07-23 10:54:56.792316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.792343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.792439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.792466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.792564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.792590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.792690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.792737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 
00:34:08.588 [2024-07-23 10:54:56.792863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.792933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.793052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.793099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.793246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.793299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.793412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.793455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.793549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.793575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 
00:34:08.588 [2024-07-23 10:54:56.793653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.793679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.793790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.793835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.793944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.793969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.794078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.794127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.794241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.794283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 
00:34:08.588 [2024-07-23 10:54:56.794422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.794461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.794671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.794729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.794832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.794876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.794961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.794988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.795101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.795145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 
00:34:08.588 [2024-07-23 10:54:56.795224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.795250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.795333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.795359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.795438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.795469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.795582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.795608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.795710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.795736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 
00:34:08.588 [2024-07-23 10:54:56.795847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.795890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.795968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.795994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.796083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.796110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.796194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.796220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.796299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.796325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 
00:34:08.588 [2024-07-23 10:54:56.796411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.796439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.796544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.796571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.796724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.796750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.796869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.796914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.797001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.797031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 
00:34:08.588 [2024-07-23 10:54:56.797120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.797148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.797241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.797268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.797389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.797457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.588 qpair failed and we were unable to recover it. 00:34:08.588 [2024-07-23 10:54:56.797599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.588 [2024-07-23 10:54:56.797643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.589 qpair failed and we were unable to recover it. 00:34:08.589 [2024-07-23 10:54:56.797727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.589 [2024-07-23 10:54:56.797754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.589 qpair failed and we were unable to recover it. 
00:34:08.589 [2024-07-23 10:54:56.797852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.797890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.797983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.798009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.798088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.798113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.798228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.798272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.798355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.798381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.798524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.798586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.798703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.798750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.798844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.798881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.799015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.799079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.799220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.799309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.799425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.799467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.799591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.799627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.799747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.799798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.799905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.799940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.800057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.800103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.800182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.800207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.800290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.800316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.800402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.800430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.800552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.800598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.800735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.800816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.800901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.800928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.801038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.801082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.801216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.801270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.801375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.801400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.801491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.801518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.801632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.801676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.801873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.801926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.802056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.802100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.589 [2024-07-23 10:54:56.802215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.589 [2024-07-23 10:54:56.802255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.589 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.802369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.802406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.802515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.802541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.802671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.802707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.802823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.802869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.802972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.803006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.803116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.803143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.803255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.803299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.803398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.803429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.803562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.803604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.803716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.803761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.803866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.803912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.803992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.804019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.804130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.804174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.804316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.804368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.804524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.804551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.804672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.804716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.804841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.804921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.805036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.805079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.805161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.805188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.805306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.805347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.805432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.805461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.805604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.805664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.805742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.805767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.805862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.805895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.805980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.806007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.806123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.806180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.806280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.806307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.806395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.806421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.806526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.806555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.806636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.806663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.806758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.806786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.806883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.806910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.807018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.807044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.807137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.807164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.807329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.807355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.807470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.807528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.807614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.590 [2024-07-23 10:54:56.807641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.590 qpair failed and we were unable to recover it.
00:34:08.590 [2024-07-23 10:54:56.807722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.807748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.807853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.807879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.807967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.808001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.808099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.808125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.808239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.808266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.808351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.808378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.808495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.808537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.808651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.808695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.808781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.808808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.808915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.808959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.809075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.809124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.809230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.809276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.809382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.809409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.809516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.809544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.809643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.809669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.809795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.809865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.810002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.810045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.810168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.810236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.810382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.810431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.810520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.810548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.810661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.810703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.810819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.810862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.810979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.811023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.811132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.811177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.811295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.811341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.811422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.811448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.811557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.811601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.811717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.811762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.811850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.591 [2024-07-23 10:54:56.811877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.591 qpair failed and we were unable to recover it.
00:34:08.591 [2024-07-23 10:54:56.811991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.591 [2024-07-23 10:54:56.812036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.591 qpair failed and we were unable to recover it. 00:34:08.591 [2024-07-23 10:54:56.812140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.591 [2024-07-23 10:54:56.812189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.591 qpair failed and we were unable to recover it. 00:34:08.591 [2024-07-23 10:54:56.812315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.591 [2024-07-23 10:54:56.812350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.591 qpair failed and we were unable to recover it. 00:34:08.591 [2024-07-23 10:54:56.812474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.591 [2024-07-23 10:54:56.812548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.591 qpair failed and we were unable to recover it. 00:34:08.591 [2024-07-23 10:54:56.812655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.591 [2024-07-23 10:54:56.812682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.591 qpair failed and we were unable to recover it. 
00:34:08.591 [2024-07-23 10:54:56.812794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.591 [2024-07-23 10:54:56.812838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.591 qpair failed and we were unable to recover it. 00:34:08.591 [2024-07-23 10:54:56.812949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.591 [2024-07-23 10:54:56.812993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.591 qpair failed and we were unable to recover it. 00:34:08.591 [2024-07-23 10:54:56.813092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.591 [2024-07-23 10:54:56.813128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.591 qpair failed and we were unable to recover it. 00:34:08.591 [2024-07-23 10:54:56.813279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.591 [2024-07-23 10:54:56.813378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:08.591 qpair failed and we were unable to recover it. 00:34:08.591 [2024-07-23 10:54:56.813553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.813581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 
00:34:08.592 [2024-07-23 10:54:56.813702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.813747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.813851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.813898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.814014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.814058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.814161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.814209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.814292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.814319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 
00:34:08.592 [2024-07-23 10:54:56.814422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.814467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.814555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.814582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.814731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.814781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.814921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.814974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.815091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.815134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 
00:34:08.592 [2024-07-23 10:54:56.815248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.815293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.815413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.815462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.815586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.815630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.815741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.815773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.815894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.815938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 
00:34:08.592 [2024-07-23 10:54:56.816047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.816092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.816221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.816266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.816345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.816371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.816536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.816565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.816660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.816688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 
00:34:08.592 [2024-07-23 10:54:56.816793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.816837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.816953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.816997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.817118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.817168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.817248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.817275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.817375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.817404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 
00:34:08.592 [2024-07-23 10:54:56.817500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.817546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.817676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.817720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.817809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.817834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.817929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.817955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.818071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.818114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 
00:34:08.592 [2024-07-23 10:54:56.818199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.818227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.818364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.818417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.818505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.818533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.818628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.818655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.818738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.818764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 
00:34:08.592 [2024-07-23 10:54:56.818854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.818881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.592 qpair failed and we were unable to recover it. 00:34:08.592 [2024-07-23 10:54:56.818963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.592 [2024-07-23 10:54:56.818989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.819070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.819096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.819196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.819249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.819339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.819368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 
00:34:08.593 [2024-07-23 10:54:56.819449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.819475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.819584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.819629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.819737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.819781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.819878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.819910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.820018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.820061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 
00:34:08.593 [2024-07-23 10:54:56.820156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.820188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.820305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.820349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.820446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.820472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.820610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.820656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.820767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.820811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 
00:34:08.593 [2024-07-23 10:54:56.820919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.820961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.821042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.821069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.821189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.821253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.821362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.821408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.821509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.821537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 
00:34:08.593 [2024-07-23 10:54:56.821646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.821690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.821821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.821880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.822026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.822076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.822185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.822228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.822324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.822355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 
00:34:08.593 [2024-07-23 10:54:56.822473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.822524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.822608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.822634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.822734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.822771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.822927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.822977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.823079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.823123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 
00:34:08.593 [2024-07-23 10:54:56.823233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.823265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.823362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.823390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.823487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.823515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.823614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.823658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.823754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.823786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 
00:34:08.593 [2024-07-23 10:54:56.823881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.823907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.824000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.824031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.824140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.824167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.593 [2024-07-23 10:54:56.824249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.593 [2024-07-23 10:54:56.824274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.593 qpair failed and we were unable to recover it. 00:34:08.594 [2024-07-23 10:54:56.824387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.594 [2024-07-23 10:54:56.824413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.594 qpair failed and we were unable to recover it. 
00:34:08.594 [2024-07-23 10:54:56.824528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.824560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.824697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.824741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.824868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.824900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.825010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.825041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.825169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.825232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.825339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.825385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.825497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.825540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.825625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.825651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.825752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.825798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.825877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.825903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.825986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.826012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.826118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.826144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.826243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.826287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.826389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.826435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.826542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.826574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.826683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.826714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.826811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.826838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.826925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.826952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.827030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.827056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.827146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.827174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.827317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.827343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.827458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.827491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.827631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.827712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.827825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.827868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.827970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.828014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.828102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.594 [2024-07-23 10:54:56.828130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.594 qpair failed and we were unable to recover it.
00:34:08.594 [2024-07-23 10:54:56.828219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.828245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.828322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.828348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.828439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.828471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.828681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.828724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.828805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.828831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.828992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.829050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.829185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.829227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.829323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.829355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.829475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.829514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.829639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.829682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.829763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.829789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.829961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.830017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.830190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.830237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.830314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.830345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.830475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.830526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.830716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.830769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.830868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.830898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.830991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.831017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.831159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.831217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.831368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.831423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.831563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.831605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.831717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.831750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.831856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.831884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.831971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.831997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.832142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.832206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.832400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.832434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.832591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.832653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.832757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.832802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.832888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.832913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.833042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.833086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.833200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.833243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.595 qpair failed and we were unable to recover it.
00:34:08.595 [2024-07-23 10:54:56.833355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.595 [2024-07-23 10:54:56.833403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.596 [2024-07-23 10:54:56.833493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.833522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.833634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.833675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.833798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.833846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.833954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.833998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.834084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.834110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.834207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.834233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.834340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.834382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.834501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.834545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.834690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.834745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.834834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.834859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.834967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.835010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.835097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.835123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.835200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.835226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.835313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.835339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.835459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.835507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.835657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.835696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.835808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.835839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.835933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.835959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.836059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.836102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.836204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.836246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.836375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.836400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.836519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.836563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.836704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.836743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.836838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.836870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.837006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.837066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.837199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.837239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.837341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.837386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.837471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.837508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.837611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.837641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.837762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.837793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.837912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.837954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.838082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.838114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.838234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.838266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.838371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.838396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.596 qpair failed and we were unable to recover it.
00:34:08.596 [2024-07-23 10:54:56.838488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.596 [2024-07-23 10:54:56.838515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.838610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.838635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.838788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.838854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.838969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.839013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.839095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.839121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.839239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.839302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.839503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.839568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.839727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.839778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.839948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.839993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.840192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.840250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.840331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.840357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.840533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.840582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.840709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.840750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.840885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.840943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.841042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.841075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.841207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.841250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.841369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.841410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.841491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.841518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.841696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.841728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.841872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.841903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.842009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.842036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.842186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.842213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.842292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.597 [2024-07-23 10:54:56.842318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.597 qpair failed and we were unable to recover it.
00:34:08.597 [2024-07-23 10:54:56.842413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.597 [2024-07-23 10:54:56.842439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.597 qpair failed and we were unable to recover it. 00:34:08.597 [2024-07-23 10:54:56.842580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.597 [2024-07-23 10:54:56.842622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.597 qpair failed and we were unable to recover it. 00:34:08.597 [2024-07-23 10:54:56.842733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.597 [2024-07-23 10:54:56.842776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.597 qpair failed and we were unable to recover it. 00:34:08.597 [2024-07-23 10:54:56.842934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.597 [2024-07-23 10:54:56.842994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.597 qpair failed and we were unable to recover it. 00:34:08.597 [2024-07-23 10:54:56.843110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.597 [2024-07-23 10:54:56.843152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.597 qpair failed and we were unable to recover it. 
00:34:08.597 [2024-07-23 10:54:56.843292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.597 [2024-07-23 10:54:56.843344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.597 qpair failed and we were unable to recover it. 00:34:08.597 [2024-07-23 10:54:56.843422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.597 [2024-07-23 10:54:56.843448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.597 qpair failed and we were unable to recover it. 00:34:08.597 [2024-07-23 10:54:56.843562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.597 [2024-07-23 10:54:56.843606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.597 qpair failed and we were unable to recover it. 00:34:08.597 [2024-07-23 10:54:56.843715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.597 [2024-07-23 10:54:56.843759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.597 qpair failed and we were unable to recover it. 00:34:08.597 [2024-07-23 10:54:56.843879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.597 [2024-07-23 10:54:56.843922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.597 qpair failed and we were unable to recover it. 
00:34:08.597 [2024-07-23 10:54:56.844019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.597 [2024-07-23 10:54:56.844049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.597 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.844151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.844195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.844291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.844317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.844422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.844467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.844563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.844591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 
00:34:08.598 [2024-07-23 10:54:56.844675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.844701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.844837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.844864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.844984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.845011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.845106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.845139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.845262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.845306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 
00:34:08.598 [2024-07-23 10:54:56.845449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.845517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.845600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.845626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.845719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.845750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.845852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.845878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.845981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.846025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 
00:34:08.598 [2024-07-23 10:54:56.846133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.846176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.846278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.846323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.846410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.846438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.846617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.846671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.846761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.846788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 
00:34:08.598 [2024-07-23 10:54:56.846868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.846895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.847005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.847066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.847163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.847193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.847303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.847333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.847509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.847567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 
00:34:08.598 [2024-07-23 10:54:56.847683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.847725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.847853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.847894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.847987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.848016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.848129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.848155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.848247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.848277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 
00:34:08.598 [2024-07-23 10:54:56.848394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.848420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.848531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.848572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.848694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.848739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.848852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.848895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.849039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.849091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 
00:34:08.598 [2024-07-23 10:54:56.849176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.849203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.849299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.849328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.849419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.598 [2024-07-23 10:54:56.849447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.598 qpair failed and we were unable to recover it. 00:34:08.598 [2024-07-23 10:54:56.849614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.849671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.849770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.849802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 
00:34:08.599 [2024-07-23 10:54:56.849902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.849936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.850044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.850074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.850204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.850250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.850358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.850402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.850487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.850514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 
00:34:08.599 [2024-07-23 10:54:56.850621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.850663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.850823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.850878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.851035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.851094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.851175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.851201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.851337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.851389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 
00:34:08.599 [2024-07-23 10:54:56.851497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.851538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.851620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.851646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.851744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.851786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.851896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.851937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.852051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.852093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 
00:34:08.599 [2024-07-23 10:54:56.852196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.852238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.852330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.852361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.852498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.852539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.852622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.852648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.852757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.852799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 
00:34:08.599 [2024-07-23 10:54:56.852883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.852908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.852990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.853016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.853119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.853160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.853268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.853309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.853413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.853438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 
00:34:08.599 [2024-07-23 10:54:56.853547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.853589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.853683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.599 [2024-07-23 10:54:56.853713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.599 qpair failed and we were unable to recover it. 00:34:08.599 [2024-07-23 10:54:56.853807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.600 [2024-07-23 10:54:56.853838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.600 qpair failed and we were unable to recover it. 00:34:08.600 [2024-07-23 10:54:56.853980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.600 [2024-07-23 10:54:56.854023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.600 qpair failed and we were unable to recover it. 00:34:08.600 [2024-07-23 10:54:56.854126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.600 [2024-07-23 10:54:56.854172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.600 qpair failed and we were unable to recover it. 
00:34:08.603 [2024-07-23 10:54:56.871760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.871789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.871900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.871943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.872058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.872100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.872212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.872254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.872359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.872403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 
00:34:08.603 [2024-07-23 10:54:56.872510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.872537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.872618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.872645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.872823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.872856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.872973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.873016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.873191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.873217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 
00:34:08.603 [2024-07-23 10:54:56.873345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.873427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.873541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.873574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.873718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.873800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.873911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.873976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.874091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.874133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 
00:34:08.603 [2024-07-23 10:54:56.874216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.874243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.874361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.874424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.874516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.874544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.874703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.874748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.874977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.875004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 
00:34:08.603 [2024-07-23 10:54:56.875090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.875116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.875216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.875265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.875371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.875413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.875514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.875542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.875623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.875650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 
00:34:08.603 [2024-07-23 10:54:56.875827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.875888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.875989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.876015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.876108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.876135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.876242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.876286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.603 [2024-07-23 10:54:56.876412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.876453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 
00:34:08.603 [2024-07-23 10:54:56.876588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.603 [2024-07-23 10:54:56.876644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.603 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.876804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.876854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.876957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.876999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.877084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.877110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.877219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.877273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 
00:34:08.604 [2024-07-23 10:54:56.877391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.877417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.877506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.877541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.877654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.877685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.877782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.877808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.877909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.877942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 
00:34:08.604 [2024-07-23 10:54:56.878067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.878111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.878258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.878311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.878394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.878421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.878548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.878578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.878663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.878689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 
00:34:08.604 [2024-07-23 10:54:56.878770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.878797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.878886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.878913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.879009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.879035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.879153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.879215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.879304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.879332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 
00:34:08.604 [2024-07-23 10:54:56.879434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.879465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.879659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.879686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.879781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.879808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.879935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.880003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.880119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.880150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 
00:34:08.604 [2024-07-23 10:54:56.880271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.880313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.880397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.880423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.880531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.880560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.880655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.880682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.880794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.880824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 
00:34:08.604 [2024-07-23 10:54:56.880938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.880968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.881077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.881106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.604 [2024-07-23 10:54:56.881196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.604 [2024-07-23 10:54:56.881223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.604 qpair failed and we were unable to recover it. 00:34:08.605 [2024-07-23 10:54:56.881316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.605 [2024-07-23 10:54:56.881344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.605 qpair failed and we were unable to recover it. 00:34:08.605 [2024-07-23 10:54:56.881426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.605 [2024-07-23 10:54:56.881453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.605 qpair failed and we were unable to recover it. 
00:34:08.605 [2024-07-23 10:54:56.881578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.605 [2024-07-23 10:54:56.881639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.605 qpair failed and we were unable to recover it. 00:34:08.605 [2024-07-23 10:54:56.881742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.605 [2024-07-23 10:54:56.881770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.605 qpair failed and we were unable to recover it. 00:34:08.605 [2024-07-23 10:54:56.881862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.605 [2024-07-23 10:54:56.881889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.605 qpair failed and we were unable to recover it. 00:34:08.605 [2024-07-23 10:54:56.881974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.605 [2024-07-23 10:54:56.882000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.605 qpair failed and we were unable to recover it. 00:34:08.605 [2024-07-23 10:54:56.882108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.605 [2024-07-23 10:54:56.882134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.605 qpair failed and we were unable to recover it. 
00:34:08.605 [2024-07-23 10:54:56.882228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.605 [2024-07-23 10:54:56.882259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.605 qpair failed and we were unable to recover it. 00:34:08.605 [2024-07-23 10:54:56.882444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.605 [2024-07-23 10:54:56.882471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.605 qpair failed and we were unable to recover it. 00:34:08.605 [2024-07-23 10:54:56.882616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.605 [2024-07-23 10:54:56.882668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.605 qpair failed and we were unable to recover it. 00:34:08.605 [2024-07-23 10:54:56.882769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.605 [2024-07-23 10:54:56.882800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.605 qpair failed and we were unable to recover it. 00:34:08.605 [2024-07-23 10:54:56.882913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.605 [2024-07-23 10:54:56.882942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.605 qpair failed and we were unable to recover it. 
00:34:08.605 [2024-07-23 10:54:56.883084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:08.605 [2024-07-23 10:54:56.883116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 
00:34:08.605 qpair failed and we were unable to recover it. 
00:34:08.605 [2024-07-23 10:54:56.884558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:08.605 [2024-07-23 10:54:56.884601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 
00:34:08.605 qpair failed and we were unable to recover it. 
00:34:08.606 [2024-07-23 10:54:56.889544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:08.606 [2024-07-23 10:54:56.889574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 
00:34:08.606 qpair failed and we were unable to recover it. 
00:34:08.608 [2024-07-23 10:54:56.900017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.900044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.900129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.900155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.900244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.900271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.900373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.900399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.900506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.900534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 
00:34:08.608 [2024-07-23 10:54:56.900627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.900658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.900763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.900792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.900890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.900918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.901035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.901062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.901203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.901247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 
00:34:08.608 [2024-07-23 10:54:56.901360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.901402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.901499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.901528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.901688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.901749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.901851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.901882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.902021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.902102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 
00:34:08.608 [2024-07-23 10:54:56.902216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.902257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.902359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.902400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.902486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.902512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.902642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.608 [2024-07-23 10:54:56.902682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.608 qpair failed and we were unable to recover it. 00:34:08.608 [2024-07-23 10:54:56.902764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.902794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 
00:34:08.609 [2024-07-23 10:54:56.902879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.902908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.902992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.903018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.903117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.903147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.903256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.903287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.903407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.903450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 
00:34:08.609 [2024-07-23 10:54:56.903565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.903610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.903725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.903768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.903932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.903977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.904069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.904114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.904201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.904227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 
00:34:08.609 [2024-07-23 10:54:56.904328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.904360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.904503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.904545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.904652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.904694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.904853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.904907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.905068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.905120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 
00:34:08.609 [2024-07-23 10:54:56.905207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.905236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.905378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.905433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.905565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.905606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.905710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.905737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.905870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.905910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 
00:34:08.609 [2024-07-23 10:54:56.906000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.906026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.906106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.906132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.906236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.906276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.906384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.906414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.906578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.906629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 
00:34:08.609 [2024-07-23 10:54:56.906784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.906815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.906964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.907009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.907150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.907191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.907281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.907309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 00:34:08.609 [2024-07-23 10:54:56.907396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.609 [2024-07-23 10:54:56.907423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.609 qpair failed and we were unable to recover it. 
00:34:08.609 [2024-07-23 10:54:56.907548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.907608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.907691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.907718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.907828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.907856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.907950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.907978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.908059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.908085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 
00:34:08.610 [2024-07-23 10:54:56.908180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.908206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.908323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.908350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.908466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.908529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.908637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.908666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.908763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.908790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 
00:34:08.610 [2024-07-23 10:54:56.908911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.908970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.909087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.909128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.909234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.909317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.909432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.909459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.909611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.909664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 
00:34:08.610 [2024-07-23 10:54:56.909772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.909801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.909904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.909931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.910046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.910086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.910194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.910234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.910344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.910372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 
00:34:08.610 [2024-07-23 10:54:56.910467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.910501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.910608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.910638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.910733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.910759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.910841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.910873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.910957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.910982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 
00:34:08.610 [2024-07-23 10:54:56.911091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.911133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.911214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.911240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.911334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.911361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.911459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.911493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 00:34:08.610 [2024-07-23 10:54:56.911591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.610 [2024-07-23 10:54:56.911619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.610 qpair failed and we were unable to recover it. 
00:34:08.612 [2024-07-23 10:54:56.919803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.612 [2024-07-23 10:54:56.919834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.612 qpair failed and we were unable to recover it.
00:34:08.612 [2024-07-23 10:54:56.921594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8e320 is same with the state(5) to be set
00:34:08.612 [2024-07-23 10:54:56.922389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.612 [2024-07-23 10:54:56.922418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.612 qpair failed and we were unable to recover it. 00:34:08.612 [2024-07-23 10:54:56.922515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.612 [2024-07-23 10:54:56.922542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.612 qpair failed and we were unable to recover it. 00:34:08.612 [2024-07-23 10:54:56.922644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.612 [2024-07-23 10:54:56.922689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.612 qpair failed and we were unable to recover it. 00:34:08.612 [2024-07-23 10:54:56.922770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.612 [2024-07-23 10:54:56.922796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.612 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.922952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.923008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 
00:34:08.613 [2024-07-23 10:54:56.923117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.923162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.923303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.923386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.923470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.923503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.923612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.923658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.923753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.923784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 
00:34:08.613 [2024-07-23 10:54:56.923891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.923922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.924021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.924048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.924170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.924202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.924290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.924316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.924403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.924430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 
00:34:08.613 [2024-07-23 10:54:56.924527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.924554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.924637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.924664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.924779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.924820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.924938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.924981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 00:34:08.613 [2024-07-23 10:54:56.925130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.613 [2024-07-23 10:54:56.925195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.613 qpair failed and we were unable to recover it. 
00:34:08.614 [2024-07-23 10:54:56.931230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.614 [2024-07-23 10:54:56.931261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.614 qpair failed and we were unable to recover it. 00:34:08.614 [2024-07-23 10:54:56.931360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.614 [2024-07-23 10:54:56.931387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.614 qpair failed and we were unable to recover it. 00:34:08.614 [2024-07-23 10:54:56.931576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.614 [2024-07-23 10:54:56.931632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.614 qpair failed and we were unable to recover it. 00:34:08.614 [2024-07-23 10:54:56.931800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.614 [2024-07-23 10:54:56.931842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.614 qpair failed and we were unable to recover it. 00:34:08.614 [2024-07-23 10:54:56.931978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.614 [2024-07-23 10:54:56.932022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.614 qpair failed and we were unable to recover it. 
00:34:08.615 [2024-07-23 10:54:56.933715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.615 [2024-07-23 10:54:56.933779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.615 qpair failed and we were unable to recover it. 00:34:08.615 [2024-07-23 10:54:56.933909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.615 [2024-07-23 10:54:56.933971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.615 qpair failed and we were unable to recover it. 00:34:08.615 [2024-07-23 10:54:56.934078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.615 [2024-07-23 10:54:56.934121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.615 qpair failed and we were unable to recover it. 00:34:08.615 [2024-07-23 10:54:56.934233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.615 [2024-07-23 10:54:56.934275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.615 qpair failed and we were unable to recover it. 00:34:08.615 [2024-07-23 10:54:56.934412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.615 [2024-07-23 10:54:56.934442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.615 qpair failed and we were unable to recover it. 
00:34:08.616 [2024-07-23 10:54:56.942172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.942203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.942328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.942359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.942492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.942539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.942628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.942671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.942791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.942822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 
00:34:08.616 [2024-07-23 10:54:56.942970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.943010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.943115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.943158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.943252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.943279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.943384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.943412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.943493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.943521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 
00:34:08.616 [2024-07-23 10:54:56.943641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.943669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.943766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.943793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.943878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.943905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.943993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.944020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.944105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.944133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 
00:34:08.616 [2024-07-23 10:54:56.944224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.944254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.944356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.944401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.944536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.944579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.616 [2024-07-23 10:54:56.944692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.616 [2024-07-23 10:54:56.944721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.616 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.944813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.944840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 
00:34:08.617 [2024-07-23 10:54:56.944923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.944949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.945040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.945069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.945158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.945188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.945269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.945296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.945375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.945401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 
00:34:08.617 [2024-07-23 10:54:56.945491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.945519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.945602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.945628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.945758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.945803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.945931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.945999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.946111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.946141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 
00:34:08.617 [2024-07-23 10:54:56.946264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.946309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.946397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.946424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.946503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.946530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.946629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.946661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.946767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.946798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 
00:34:08.617 [2024-07-23 10:54:56.946917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.946960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.947052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.947078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.947208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.947234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.947338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.947379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.947505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.947557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 
00:34:08.617 [2024-07-23 10:54:56.947699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.947751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.947898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.947957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.948038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.948064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.948146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.948176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.948278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.948310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 
00:34:08.617 [2024-07-23 10:54:56.948412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.948438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.948558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.948603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.948714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.948747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.948880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.948924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.949076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.949118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 
00:34:08.617 [2024-07-23 10:54:56.949224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.949267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.949355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.949382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.949478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.949516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.949670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.949735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.949860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.949903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 
00:34:08.617 [2024-07-23 10:54:56.950011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.950041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.950185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.617 [2024-07-23 10:54:56.950226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.617 qpair failed and we were unable to recover it. 00:34:08.617 [2024-07-23 10:54:56.950343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.950385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.950504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.950545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.950666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.950708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 
00:34:08.618 [2024-07-23 10:54:56.950817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.950850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.950954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.950983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.951124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.951166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.951270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.951301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.951415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.951459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 
00:34:08.618 [2024-07-23 10:54:56.951573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.951616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.951739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.951781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.951869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.951895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.952001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.952044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.952155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.952199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 
00:34:08.618 [2024-07-23 10:54:56.952348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.952406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.952517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.952550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.952667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.952711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.952798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.952825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.952907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.952934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 
00:34:08.618 [2024-07-23 10:54:56.953031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.953061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.953151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.953184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.953278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.953305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.953393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.953419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.953509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.953537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 
00:34:08.618 [2024-07-23 10:54:56.953625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.953651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.953743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.953772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.953864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.953891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.953975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.954000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 00:34:08.618 [2024-07-23 10:54:56.954091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.618 [2024-07-23 10:54:56.954117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.618 qpair failed and we were unable to recover it. 
00:34:08.618 [2024-07-23 10:54:56.954206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.618 [2024-07-23 10:54:56.954235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.618 qpair failed and we were unable to recover it.
00:34:08.618 [2024-07-23 10:54:56.954322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.618 [2024-07-23 10:54:56.954349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.618 qpair failed and we were unable to recover it.
00:34:08.618 [2024-07-23 10:54:56.954441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.618 [2024-07-23 10:54:56.954475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.618 qpair failed and we were unable to recover it.
00:34:08.618 [2024-07-23 10:54:56.954586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.618 [2024-07-23 10:54:56.954614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.618 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.954708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.954751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.954860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.954909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.955020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.955064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.955181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.955225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.955313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.955339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.955427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.955455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.955574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.955619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.955717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.955758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.955861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.955892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.955985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.956011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.956101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.956128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.956222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.956248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.956327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.956353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.956455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.956495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.956594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.956621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.956720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.956764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.956852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.956880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.956990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.957030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.957127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.957154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.957248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.957276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.957357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.957383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.957469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.957513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.957609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.957637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.957715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.957742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.957824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.957851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.957948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.957978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.958066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.958092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.958182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.958216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.958301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.958328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.958425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.958454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.958548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.958574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.958677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.958721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.958801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.958827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.958930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.958970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.959075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.959104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.959221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.959250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.959342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.959368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.619 [2024-07-23 10:54:56.959461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.619 [2024-07-23 10:54:56.959499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.619 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.959612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.959642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.959763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.959805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.959904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.959932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.960050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.960079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.960199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.960239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.960336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.960369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.960491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.960534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.960632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.960663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.960780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.960810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.960926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.960956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.961078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.961124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.961234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.961277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.961372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.961401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.961528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.961559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.961681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.961723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.961803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.961830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.961914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.961942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.962041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.962081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.962181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.962210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.962301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.962329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.962414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.962441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.962541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.962571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.962694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.962737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.962863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.962945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.963082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.963136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.963238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.963280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.963376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.963427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.963597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.963648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.963756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.963800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.963902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.963946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.964059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.964117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.964209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.964237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.964352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.964381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.964489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.964518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.964608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.964635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.964751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.964805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.964910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.964940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.620 [2024-07-23 10:54:56.965062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.620 [2024-07-23 10:54:56.965092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.620 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.965207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.965237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.965373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.965428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.965585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.965640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.965735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.965763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.965864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.965895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.965994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.966021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.966122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.966151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.966265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.966294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.966406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.966447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.966567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.966629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.966727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.966756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.966873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.966913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.966997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.967028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.967142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.967187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.967270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.621 [2024-07-23 10:54:56.967296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.621 qpair failed and we were unable to recover it.
00:34:08.621 [2024-07-23 10:54:56.967398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.967430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.967555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.967585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.967688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.967715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.967818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.967848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.967958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.967987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 
00:34:08.621 [2024-07-23 10:54:56.968094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.968124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.968241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.968284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.968364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.968392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.968488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.968515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.968608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.968634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 
00:34:08.621 [2024-07-23 10:54:56.968727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.968754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.968849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.968875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.968962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.968988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.969070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.969096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.969189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.969217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 
00:34:08.621 [2024-07-23 10:54:56.969315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.969343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.969438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.969464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.969580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.969622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.969704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.969730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.969807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.969833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 
00:34:08.621 [2024-07-23 10:54:56.969932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.969973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.970070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.970099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.621 [2024-07-23 10:54:56.970221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.621 [2024-07-23 10:54:56.970262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.621 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.970359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.970386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.970490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.970537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 
00:34:08.622 [2024-07-23 10:54:56.970635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.970664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.970785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.970827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.970936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.970978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.971075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.971105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.971214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.971243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 
00:34:08.622 [2024-07-23 10:54:56.971345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.971371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.971475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.971527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.971629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.971670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.971773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.971803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.971911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.971940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 
00:34:08.622 [2024-07-23 10:54:56.972041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.972067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.972147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.972173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.972264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.972290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.972384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.972410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.972511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.972537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 
00:34:08.622 [2024-07-23 10:54:56.972621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.972646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.972743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.972782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.972860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.972886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.972978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.973006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.973091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.973117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 
00:34:08.622 [2024-07-23 10:54:56.973209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.973235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.973325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.973352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.973445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.973470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.973572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.973597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.973685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.973712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 
00:34:08.622 [2024-07-23 10:54:56.973802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.973827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.973905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.973935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.974018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.974042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.974135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.974159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.974264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.974296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 
00:34:08.622 [2024-07-23 10:54:56.974418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.974448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.974574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.974601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.974709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.974749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.974842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.974870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.974979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.975021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 
00:34:08.622 [2024-07-23 10:54:56.975121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.622 [2024-07-23 10:54:56.975149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.622 qpair failed and we were unable to recover it. 00:34:08.622 [2024-07-23 10:54:56.975265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.975293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.975391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.975418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.975510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.975538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.975644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.975672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 
00:34:08.623 [2024-07-23 10:54:56.975799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.975826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.975947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.975986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.976088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.976115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.976213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.976239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.976345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.976386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 
00:34:08.623 [2024-07-23 10:54:56.976490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.976518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.976617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.976642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.976741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.976767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.976885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.976913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.977027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.977053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 
00:34:08.623 [2024-07-23 10:54:56.977154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.977179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.977260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.977285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.977379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.977405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.977530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.977561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.977681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.977708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 
00:34:08.623 [2024-07-23 10:54:56.977806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.977831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.977908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.977933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.978018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.978043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.978139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.978165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 00:34:08.623 [2024-07-23 10:54:56.978280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.623 [2024-07-23 10:54:56.978306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.623 qpair failed and we were unable to recover it. 
00:34:08.623 [2024-07-23 10:54:56.978403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.623 [2024-07-23 10:54:56.978432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.623 qpair failed and we were unable to recover it.
[The same three-line failure sequence repeats continuously from 10:54:56.978 through 10:54:56.992 (log timestamps 00:34:08.623-00:34:08.626), cycling over tqpair values 0x7fb6e8000b90, 0x7fb6e0000b90, and 0x1f80990. Every attempt targets addr=10.0.0.2, port=4420, fails in posix_sock_create with errno = 111, and ends with "qpair failed and we were unable to recover it."]
00:34:08.626 [2024-07-23 10:54:56.992309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.626 [2024-07-23 10:54:56.992336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.626 qpair failed and we were unable to recover it. 00:34:08.626 [2024-07-23 10:54:56.992416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.626 [2024-07-23 10:54:56.992443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.626 qpair failed and we were unable to recover it. 00:34:08.626 [2024-07-23 10:54:56.992553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.626 [2024-07-23 10:54:56.992582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.626 qpair failed and we were unable to recover it. 00:34:08.626 [2024-07-23 10:54:56.992665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.626 [2024-07-23 10:54:56.992698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.626 qpair failed and we were unable to recover it. 00:34:08.626 [2024-07-23 10:54:56.992802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.626 [2024-07-23 10:54:56.992829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.626 qpair failed and we were unable to recover it. 
00:34:08.626 [2024-07-23 10:54:56.992909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.626 [2024-07-23 10:54:56.992936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.626 qpair failed and we were unable to recover it. 00:34:08.626 [2024-07-23 10:54:56.993020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.626 [2024-07-23 10:54:56.993046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.993133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.993159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.993241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.993266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.993369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.993397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 
00:34:08.627 [2024-07-23 10:54:56.993497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.993524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.993658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.993688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.993776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.993803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.993888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.993915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.994004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.994030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 
00:34:08.627 [2024-07-23 10:54:56.994131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.994157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.994249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.994275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.994362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.994388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.994476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.994508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.994588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.994615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 
00:34:08.627 [2024-07-23 10:54:56.994705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.994732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.994848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.994874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.994960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.994987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.995078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.995105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.995186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.995213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 
00:34:08.627 [2024-07-23 10:54:56.995317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.995357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.995462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.995499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.995595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.995628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.995734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.995762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.995851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.995878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 
00:34:08.627 [2024-07-23 10:54:56.995961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.995988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.996080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.996109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.996207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.996235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.996321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.996347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.996434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.996460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 
00:34:08.627 [2024-07-23 10:54:56.996566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.996592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.996675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.996701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.996814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.996865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.996952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.996979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.997062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.997094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 
00:34:08.627 [2024-07-23 10:54:56.997180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.997206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.997287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.997312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.997401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.997427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.997507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.997534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.627 qpair failed and we were unable to recover it. 00:34:08.627 [2024-07-23 10:54:56.997624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.627 [2024-07-23 10:54:56.997650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 
00:34:08.628 [2024-07-23 10:54:56.997742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.997768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.997852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.997879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.997961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.997986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.998066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.998092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.998177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.998203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 
00:34:08.628 [2024-07-23 10:54:56.998294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.998321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.998419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.998446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.998549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.998576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.998683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.998709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.998801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.998826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 
00:34:08.628 [2024-07-23 10:54:56.998911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.998937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.999028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.999058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.999147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.999174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.999269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.999295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.999385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.999411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 
00:34:08.628 [2024-07-23 10:54:56.999495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.999522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.999611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.999639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.999728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.999754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.999847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.999875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:56.999960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:56.999985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 
00:34:08.628 [2024-07-23 10:54:57.000080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.000109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.000189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.000219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.000307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.000333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.000415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.000442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.000551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.000578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 
00:34:08.628 [2024-07-23 10:54:57.000665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.000691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.000768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.000794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.000885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.000911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.000992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.001018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.001100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.001126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 
00:34:08.628 [2024-07-23 10:54:57.001205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.001231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.001329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.001364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.001464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.001508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.001617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.001644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.001735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.001765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 
00:34:08.628 [2024-07-23 10:54:57.001859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.628 [2024-07-23 10:54:57.001894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.628 qpair failed and we were unable to recover it. 00:34:08.628 [2024-07-23 10:54:57.001992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.002018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.002103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.002130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.002217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.002243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.002333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.002361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 
00:34:08.629 [2024-07-23 10:54:57.002439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.002465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.002567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.002596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.002690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.002717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.002807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.002834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.002914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.002940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 
00:34:08.629 [2024-07-23 10:54:57.003026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.003054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.003148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.003179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.003265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.003291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.003380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.003410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.003509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.003536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 
00:34:08.629 [2024-07-23 10:54:57.003628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.003654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.003745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.003771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.003852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.003877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.003964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.003990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.004069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.004094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 
00:34:08.629 [2024-07-23 10:54:57.004178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.004204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.004285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.004311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.004398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.004425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.004516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.004543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.004623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.004649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 
00:34:08.629 [2024-07-23 10:54:57.004734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.004760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.004851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.004878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.004975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.005001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.005091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.005118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.005211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.005241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 
00:34:08.629 [2024-07-23 10:54:57.005331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.005359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.005437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.005463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.005562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.005589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.005676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.005704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.005792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.005818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 
00:34:08.629 [2024-07-23 10:54:57.005904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.005930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.006020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.006048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.006129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.006155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.006238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.006266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.006359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.006385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 
00:34:08.629 [2024-07-23 10:54:57.006469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.006517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.006608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.629 [2024-07-23 10:54:57.006634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.629 qpair failed and we were unable to recover it. 00:34:08.629 [2024-07-23 10:54:57.006718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.006744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.006829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.006857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.006940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.006966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 
00:34:08.630 [2024-07-23 10:54:57.007050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.007076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.007158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.007185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.007274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.007300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.007381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.007407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.007495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.007520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 
00:34:08.630 [2024-07-23 10:54:57.007614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.007640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.007725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.007751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.007829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.007855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.007942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.007968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.008068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.008094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 
00:34:08.630 [2024-07-23 10:54:57.008181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.008206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.008287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.008313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.008400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.008425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.008502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.008528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.008613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.008638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 
00:34:08.630 [2024-07-23 10:54:57.008725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.008750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.008834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.008860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.008951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.008977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.009064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.009091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.009180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.009206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 
00:34:08.630 [2024-07-23 10:54:57.009290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.009316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.009397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.009424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.009504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.009541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.009626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.009651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.009736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.009762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 
00:34:08.630 [2024-07-23 10:54:57.009849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.009875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.009960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.009989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.010083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.010110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.010202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.010230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.010309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.010335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 
00:34:08.630 [2024-07-23 10:54:57.010414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.010440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.010534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.010564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.010655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.010683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.010777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.010804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.010882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.010908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 
00:34:08.630 [2024-07-23 10:54:57.010994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.011020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.011104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.011130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.011213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.630 [2024-07-23 10:54:57.011238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.630 qpair failed and we were unable to recover it. 00:34:08.630 [2024-07-23 10:54:57.011329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.011356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.011444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.011469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 
00:34:08.631 [2024-07-23 10:54:57.011567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.011594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.011686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.011715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.011814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.011841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.011925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.011953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.012031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.012058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 
00:34:08.631 [2024-07-23 10:54:57.012151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.012181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.012283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.012311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.012397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.012424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.012503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.012530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.012613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.012645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 
00:34:08.631 [2024-07-23 10:54:57.012732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.012758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.012856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.012885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.012977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.013005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.013085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.013112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 00:34:08.631 [2024-07-23 10:54:57.013205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.631 [2024-07-23 10:54:57.013231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.631 qpair failed and we were unable to recover it. 
00:34:08.631 [2024-07-23 10:54:57.013309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.013336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.013429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.013455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.013553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.013581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.013667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.013694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.013776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.013802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.013883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.013910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.013990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.014017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.014101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.014128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.014218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.014246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.014326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.014352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.014436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.014462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.014559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.014585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.014670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.014695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.014783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.014813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.014936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.014965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.015068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.015093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.015180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.015205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.015284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.015310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.015390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.015416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.015504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.015530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.015617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.015643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.015726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.015752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.015830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.015855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.015946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.631 [2024-07-23 10:54:57.015976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.631 qpair failed and we were unable to recover it.
00:34:08.631 [2024-07-23 10:54:57.016061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.016088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.016169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.016196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.016314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.016342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.016441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.016468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.016575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.016604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.016695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.016721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.016808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.016835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.016924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.016950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.017043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.017071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.017156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.017182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.017268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.017295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.018203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.018234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.018327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.018356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.018437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.018463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.018557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.018583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.018684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.018710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.018793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.018819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.018911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.018939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.019028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.019056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.019143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.019171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.019284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.019311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.019403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.019429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.019531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.019559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.019662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.019691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.019782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.019809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.019903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.019931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.020021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.020048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.020139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.020165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.020256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.020284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.020363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.020390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.020489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.020517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.020638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.020665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.020776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.632 [2024-07-23 10:54:57.020821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.632 qpair failed and we were unable to recover it.
00:34:08.632 [2024-07-23 10:54:57.020913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.020939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.021025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.021058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.021138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.021169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.021258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.021284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.021380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.021411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.021546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.021588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.021685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.021713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.021795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.021821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.021901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.021928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.022019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.022045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.022133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.022164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.022254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.022282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.022376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.022403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.022492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.022519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.022613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.022640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.022739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.022765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.022859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.022887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.022973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.023000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.023095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.023121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.023211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.023238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.023326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.023352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.023438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.023466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.023571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.023601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.023696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.023725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.023830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.023859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.023941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.023968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.024052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.024079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.024168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.633 [2024-07-23 10:54:57.024196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.633 qpair failed and we were unable to recover it.
00:34:08.633 [2024-07-23 10:54:57.024280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.634 [2024-07-23 10:54:57.024309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.634 qpair failed and we were unable to recover it.
00:34:08.634 [2024-07-23 10:54:57.024403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.634 [2024-07-23 10:54:57.024431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.634 qpair failed and we were unable to recover it.
00:34:08.634 [2024-07-23 10:54:57.024527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.634 [2024-07-23 10:54:57.024554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.634 qpair failed and we were unable to recover it.
00:34:08.634 [2024-07-23 10:54:57.024637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.634 [2024-07-23 10:54:57.024677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.634 qpair failed and we were unable to recover it.
00:34:08.634 [2024-07-23 10:54:57.024770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.634 [2024-07-23 10:54:57.024797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.634 qpair failed and we were unable to recover it.
00:34:08.634 [2024-07-23 10:54:57.024891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.634 [2024-07-23 10:54:57.024917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.634 qpair failed and we were unable to recover it.
00:34:08.634 [2024-07-23 10:54:57.024999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.634 [2024-07-23 10:54:57.025025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.634 qpair failed and we were unable to recover it.
00:34:08.634 [2024-07-23 10:54:57.025117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.634 [2024-07-23 10:54:57.025146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.634 qpair failed and we were unable to recover it.
00:34:08.634 [2024-07-23 10:54:57.025228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.634 [2024-07-23 10:54:57.025255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.634 qpair failed and we were unable to recover it.
00:34:08.634 [2024-07-23 10:54:57.025341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.634 [2024-07-23 10:54:57.025371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.634 qpair failed and we were unable to recover it.
00:34:08.634 [2024-07-23 10:54:57.025464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.025502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.025600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.025627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.025721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.025748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.025828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.025854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.025937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.025964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 
00:34:08.634 [2024-07-23 10:54:57.026047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.026073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.026161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.026190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.026285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.026312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.026393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.026419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.026545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.026573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 
00:34:08.634 [2024-07-23 10:54:57.026673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.026699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.026792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.026822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.026908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.026937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.027025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.027051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.027143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.027172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 
00:34:08.634 [2024-07-23 10:54:57.027259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.027287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.027374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.027400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.027491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.027518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.027601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.027627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.027718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.027746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 
00:34:08.634 [2024-07-23 10:54:57.027840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.027868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.027958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.027986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.028073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.028099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.028178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.028204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.028282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.028309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 
00:34:08.634 [2024-07-23 10:54:57.028387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.028413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.028508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.028536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.028626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.028655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.634 [2024-07-23 10:54:57.028740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.634 [2024-07-23 10:54:57.028767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.634 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.028864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.028891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 
00:34:08.635 [2024-07-23 10:54:57.028971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.028997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.029084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.029110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.029212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.029241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.029330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.029361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.029453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.029487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 
00:34:08.635 [2024-07-23 10:54:57.029593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.029620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.029707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.029734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.029825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.029852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.029946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.029975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.030071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.030099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 
00:34:08.635 [2024-07-23 10:54:57.030188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.030215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.030295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.030321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.030408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.030434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.030540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.030568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.030661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.030689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 
00:34:08.635 [2024-07-23 10:54:57.030777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.030805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.030895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.030921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.031017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.031044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.031127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.031157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.031251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.031279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 
00:34:08.635 [2024-07-23 10:54:57.031370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.031399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.031489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.031516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.031607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.031633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.031714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.031740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.031831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.031858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 
00:34:08.635 [2024-07-23 10:54:57.031946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.031972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.032065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.032092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.032182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.032209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.032286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.032312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.032400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.032428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 
00:34:08.635 [2024-07-23 10:54:57.032513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.032544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.032631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.032656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.032740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.032766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.032846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.032872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.032959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.032987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 
00:34:08.635 [2024-07-23 10:54:57.033095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.033122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.635 qpair failed and we were unable to recover it. 00:34:08.635 [2024-07-23 10:54:57.033214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.635 [2024-07-23 10:54:57.033240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.033319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.033346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.033448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.033506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.033599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.033626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 
00:34:08.636 [2024-07-23 10:54:57.033712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.033738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.033820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.033847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.033933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.033959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.034046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.034072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.034163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.034189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 
00:34:08.636 [2024-07-23 10:54:57.034273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.034299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.034382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.034408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.034493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.034519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.034604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.034630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.034725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.034751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 
00:34:08.636 [2024-07-23 10:54:57.034834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.034860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.034956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.034982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.035072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.035104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.035196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.035226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.035321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.035348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 
00:34:08.636 [2024-07-23 10:54:57.035433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.035460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.035566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.035594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.035684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.035715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.035804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.035831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.035921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.035948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 
00:34:08.636 [2024-07-23 10:54:57.036037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.036062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.036151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.036176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.036261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.036286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.036373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.036401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.036491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.036521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 
00:34:08.636 [2024-07-23 10:54:57.036616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.036642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.036724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.036751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.036840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.036866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.036948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.036974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.037064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.037090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 
00:34:08.636 [2024-07-23 10:54:57.037178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.037206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.037317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.037345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.037439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.037466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.037562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.037590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.636 qpair failed and we were unable to recover it. 00:34:08.636 [2024-07-23 10:54:57.037684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.636 [2024-07-23 10:54:57.037714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 
00:34:08.637 [2024-07-23 10:54:57.037806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.037833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.037920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.037947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.038032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.038058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.038142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.038168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.038255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.038282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 
00:34:08.637 [2024-07-23 10:54:57.038369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.038396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.038489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.038518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.038606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.038632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.038713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.038742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.038822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.038852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 
00:34:08.637 [2024-07-23 10:54:57.038945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.038974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.039061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.039088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.039172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.039199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.039289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.039317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.039407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.039433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 
00:34:08.637 [2024-07-23 10:54:57.039524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.039551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.039645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.039672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.039764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.039790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.039874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.039901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.039993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.040020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 
00:34:08.637 [2024-07-23 10:54:57.040100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.040126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.040234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.040259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.040351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.040378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.040472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.040505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.040590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.040617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 
00:34:08.637 [2024-07-23 10:54:57.040715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.040742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.040840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.040869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.040958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.040985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.041078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.041105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 00:34:08.637 [2024-07-23 10:54:57.041194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.637 [2024-07-23 10:54:57.041222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.637 qpair failed and we were unable to recover it. 
00:34:08.637 [2024-07-23 10:54:57.041310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.041337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.041419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.041446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.041543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.041570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.041696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.041725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.041823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.041849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 
00:34:08.638 [2024-07-23 10:54:57.041931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.041957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.042046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.042076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.042163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.042190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.042279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.042307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.042397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.042424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 
00:34:08.638 [2024-07-23 10:54:57.042508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.042535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.042615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.042641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.042735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.042762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.042857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.042883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.042978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.043007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 
00:34:08.638 [2024-07-23 10:54:57.043097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.043126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.043205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.043232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.043314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.043341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.043425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.043451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.043593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.043641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 
00:34:08.638 [2024-07-23 10:54:57.043736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.043765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.043852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.043879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.043979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.044005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.044097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.044125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.044207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.044234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 
00:34:08.638 [2024-07-23 10:54:57.044323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.044351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.044453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.044491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.044601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.044643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.044739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.044767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.044854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.044881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 
00:34:08.638 [2024-07-23 10:54:57.044971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.044998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.045081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.045108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.045198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.045226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.045321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.045348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.045444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.045470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 
00:34:08.638 [2024-07-23 10:54:57.045602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.045628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.045710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.045736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.638 qpair failed and we were unable to recover it. 00:34:08.638 [2024-07-23 10:54:57.045824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.638 [2024-07-23 10:54:57.045850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.045943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.045973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.046070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.046097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 
00:34:08.639 [2024-07-23 10:54:57.046182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.046209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.046297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.046324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.046406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.046433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.046535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.046562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.046649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.046677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 
00:34:08.639 [2024-07-23 10:54:57.046793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.046819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.046908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.046935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.047024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.047052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.047139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.047166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.047263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.047304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 
00:34:08.639 [2024-07-23 10:54:57.047397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.047425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.047516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.047544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.047629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.047655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.047749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.047775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 00:34:08.639 [2024-07-23 10:54:57.047865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.639 [2024-07-23 10:54:57.047891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.639 qpair failed and we were unable to recover it. 
00:34:08.639 [2024-07-23 10:54:57.047986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.048013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.048106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.048134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.048216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.048243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.048323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.048349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.048437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.048463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.048570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.048597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.048688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.048716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.048801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.048827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.048917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.048944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.049032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.049058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.049147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.049174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.049263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.049290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.049377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.049404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.049492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.049521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.049634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.049676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.049781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.049809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.049897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.049923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.050013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.050039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.050134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.050165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.050278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.639 [2024-07-23 10:54:57.050316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.639 qpair failed and we were unable to recover it.
00:34:08.639 [2024-07-23 10:54:57.050412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.050439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.050546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.050574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.050664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.050693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.050780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.050807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.050886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.050913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.051014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.051040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.051130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.051157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.051262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.051297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.051398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.051425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.051555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.051583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.051668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.051694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.051790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.051823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.051913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.051938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.052017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.052043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.052131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.052161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.052253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.052279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.052366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.052392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.052487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.052514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.052634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.052660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.052752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.052778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.052868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.052895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.052976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.053003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.053091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.053121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.053221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.053247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.053331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.053358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.053465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.053513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.053607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.053637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.053770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.053816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.053934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.053962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.054069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.054106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.054211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.054237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.054331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.054359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.054447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.054474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.054585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.054611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.054694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.054723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.054807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.054834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.054914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.054940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.055029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.640 [2024-07-23 10:54:57.055055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.640 qpair failed and we were unable to recover it.
00:34:08.640 [2024-07-23 10:54:57.055146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.055173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.055255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.055281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.055374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.055401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.055503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.055531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.055617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.055645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.055754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.055781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.055874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.055901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.055983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.056009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.056105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.056131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.056213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.056238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.056327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.056354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.056474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.056528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.056619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.056646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.056738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.056773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.056855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.056882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.056965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.056992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.057078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.057104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.057189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.057215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.057307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.057334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.057418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.057446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.057539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.057568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.057657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.057684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.057769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.057795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.057888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.057916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.058000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.058025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.058113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.058137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.058223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.058253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.058343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.058369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.058456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.058487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.058579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.058605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.058695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.058723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.058809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.641 [2024-07-23 10:54:57.058837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.641 qpair failed and we were unable to recover it.
00:34:08.641 [2024-07-23 10:54:57.058936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.058965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.059052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.059080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.059175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.059202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.059281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.059308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.059399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.059429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.059518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.059545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.059633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.059659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.059740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.059766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.059850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.059876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.059968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.059994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.060098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.060137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.060235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.060270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.060363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.060391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.060488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.060517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.642 [2024-07-23 10:54:57.060613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.642 [2024-07-23 10:54:57.060640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.642 qpair failed and we were unable to recover it.
00:34:08.927 [2024-07-23 10:54:57.060728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.927 [2024-07-23 10:54:57.060757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.927 qpair failed and we were unable to recover it.
00:34:08.927 [2024-07-23 10:54:57.060841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.927 [2024-07-23 10:54:57.060868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.927 qpair failed and we were unable to recover it.
00:34:08.927 [2024-07-23 10:54:57.060950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.927 [2024-07-23 10:54:57.060977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.927 qpair failed and we were unable to recover it.
00:34:08.927 [2024-07-23 10:54:57.061058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.927 [2024-07-23 10:54:57.061084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.927 qpair failed and we were unable to recover it.
00:34:08.927 [2024-07-23 10:54:57.061166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.927 [2024-07-23 10:54:57.061193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.927 qpair failed and we were unable to recover it.
00:34:08.927 [2024-07-23 10:54:57.061279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.927 [2024-07-23 10:54:57.061305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.927 qpair failed and we were unable to recover it.
00:34:08.927 [2024-07-23 10:54:57.061385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.927 [2024-07-23 10:54:57.061417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.927 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.061514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.061540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.061653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.061681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.061773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.061801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.061898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.061928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.062018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.062046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.062127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.062153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.062242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.062269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.062349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.062375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.062461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.062494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.062598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.062625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.062710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.062737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.062839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.062865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.062948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.062975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.063063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.063090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.063188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.063218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.063312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.063340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.063438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.063466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.063581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.063609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.063694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.063720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.063805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.063832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.063915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.063942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.064044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.064074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.064165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.064197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.064289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.064316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.064401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.064427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.064571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.064598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.064685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.064716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.064812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.064840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.064922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.064947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.065037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.065063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.065157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.065184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.065267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.065292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.065393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.065418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.065522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.065549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.065650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.065676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.065759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.065784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.928 [2024-07-23 10:54:57.065876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.928 [2024-07-23 10:54:57.065901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.928 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.065987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.066012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.066130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.066158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.066248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.066274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.066371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.066399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.066493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.066524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.066622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.066650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.066761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.066789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.066877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.066903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.066985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.067011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.067100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.067126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.067213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.067239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.067336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.067361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.067449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.067476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.067569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.067596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.067685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.067712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.067806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.067833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.067922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.067949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.068031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.068057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.068154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.068180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.068265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.068297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.068385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.068424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.068553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.068581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.068673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.068701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.068805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.068833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.068922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.068948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.069040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.069069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.069154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.069182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.069283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.069310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.069402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.069432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.069526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.069559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.069655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.069681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.069767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.069794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.069898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.069944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.070044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.070071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.070165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.070191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.070278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.070305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.070398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.070425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.929 [2024-07-23 10:54:57.070509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.929 [2024-07-23 10:54:57.070542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.929 qpair failed and we were unable to recover it.
00:34:08.930 [2024-07-23 10:54:57.070668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.930 [2024-07-23 10:54:57.070694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.930 qpair failed and we were unable to recover it.
00:34:08.930 [2024-07-23 10:54:57.070782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.930 [2024-07-23 10:54:57.070809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.930 qpair failed and we were unable to recover it.
00:34:08.930 [2024-07-23 10:54:57.070892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.930 [2024-07-23 10:54:57.070918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.930 qpair failed and we were unable to recover it.
00:34:08.930 [2024-07-23 10:54:57.070997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.071023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.071114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.071141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.071235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.071264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.071357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.071383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.071476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.071515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 
00:34:08.930 [2024-07-23 10:54:57.071605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.071632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.071735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.071764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.071856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.071883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.071997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.072039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.072128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.072155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 
00:34:08.930 [2024-07-23 10:54:57.072248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.072275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.072360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.072386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.072468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.072501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.072607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.072635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.072727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.072753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 
00:34:08.930 [2024-07-23 10:54:57.072870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.072899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.072993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.073021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.073115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.073143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.073228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.073254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.073345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.073371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 
00:34:08.930 [2024-07-23 10:54:57.073459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.073492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.073584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.073609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.073693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.073720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.073817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.073843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.073926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.073952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 
00:34:08.930 [2024-07-23 10:54:57.074045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.074071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.074156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.074183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.074272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.074299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.074392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.074419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.074523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.074550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 
00:34:08.930 [2024-07-23 10:54:57.074669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.074695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.074776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.074803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.074894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.074923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.075012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.930 [2024-07-23 10:54:57.075041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.930 qpair failed and we were unable to recover it. 00:34:08.930 [2024-07-23 10:54:57.075137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.075165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 
00:34:08.931 [2024-07-23 10:54:57.075244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.075271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.075352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.075379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.075470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.075502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.075613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.075641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.075726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.075752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 
00:34:08.931 [2024-07-23 10:54:57.075833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.075860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.075947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.075975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.076071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.076101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.076184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.076210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.076292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.076320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 
00:34:08.931 [2024-07-23 10:54:57.076419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.076448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.076542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.076568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.076654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.076681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.076763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.076789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.076882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.076910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 
00:34:08.931 [2024-07-23 10:54:57.077001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.077029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.077129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.077155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.077252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.077278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.077360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.077386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.077469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.077502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 
00:34:08.931 [2024-07-23 10:54:57.077597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.077623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.077707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.077733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.077812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.077838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.077928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.077954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.078038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.078064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 
00:34:08.931 [2024-07-23 10:54:57.078144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.078170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.078273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.078300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.078376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.078402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.078495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.078523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.078628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.078654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 
00:34:08.931 [2024-07-23 10:54:57.078754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.078779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.078861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.078887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.078979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.079005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.079104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.079134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.079218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.079252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 
00:34:08.931 [2024-07-23 10:54:57.079353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.079381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.079471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.079514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.931 qpair failed and we were unable to recover it. 00:34:08.931 [2024-07-23 10:54:57.079601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.931 [2024-07-23 10:54:57.079628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 00:34:08.932 [2024-07-23 10:54:57.079721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.932 [2024-07-23 10:54:57.079747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 00:34:08.932 [2024-07-23 10:54:57.079830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.932 [2024-07-23 10:54:57.079857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 
00:34:08.932 [2024-07-23 10:54:57.079948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.932 [2024-07-23 10:54:57.079974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 00:34:08.932 [2024-07-23 10:54:57.080076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.932 [2024-07-23 10:54:57.080103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 00:34:08.932 [2024-07-23 10:54:57.080184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.932 [2024-07-23 10:54:57.080210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 00:34:08.932 [2024-07-23 10:54:57.080296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.932 [2024-07-23 10:54:57.080323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 00:34:08.932 [2024-07-23 10:54:57.080405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.932 [2024-07-23 10:54:57.080431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 
00:34:08.932 [2024-07-23 10:54:57.080526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.932 [2024-07-23 10:54:57.080555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 00:34:08.932 [2024-07-23 10:54:57.080639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.932 [2024-07-23 10:54:57.080666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 00:34:08.932 [2024-07-23 10:54:57.080758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.932 [2024-07-23 10:54:57.080784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 00:34:08.932 [2024-07-23 10:54:57.080880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.932 [2024-07-23 10:54:57.080907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 00:34:08.932 [2024-07-23 10:54:57.080988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.932 [2024-07-23 10:54:57.081014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.932 qpair failed and we were unable to recover it. 
00:34:08.932 [2024-07-23 10:54:57.081102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.932 [2024-07-23 10:54:57.081128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.932 qpair failed and we were unable to recover it.
[... the same three-line record (posix.c:1037 connect() failed, errno = 111 → nvme_tcp.c:2374 sock connection error with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats continuously from 10:54:57.081 through 10:54:57.095, cycling over tqpair=0x1f80990, 0x7fb6e0000b90, 0x7fb6e8000b90, and 0x7fb6f0000b90 ...]
00:34:08.935 [2024-07-23 10:54:57.095079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.095106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.095202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.095230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.095323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.095351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.095431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.095459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.095564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.095592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 
00:34:08.935 [2024-07-23 10:54:57.095679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.095706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.095796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.095822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.095925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.095952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.096043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.096072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.096164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.096190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 
00:34:08.935 [2024-07-23 10:54:57.096279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.096308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.096403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.096429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.096568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.096603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.096687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.096714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.096804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.096831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 
00:34:08.935 [2024-07-23 10:54:57.096921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.096946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.097036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.935 [2024-07-23 10:54:57.097064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.935 qpair failed and we were unable to recover it. 00:34:08.935 [2024-07-23 10:54:57.097155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.097182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.097260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.097288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.097374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.097400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 
00:34:08.936 [2024-07-23 10:54:57.097495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.097529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.097641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.097668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.097752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.097778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.097883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.097910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.097994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.098020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 
00:34:08.936 [2024-07-23 10:54:57.098106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.098133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.098217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.098243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.098334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.098361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.098450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.098477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.098572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.098599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 
00:34:08.936 [2024-07-23 10:54:57.098685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.098716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.098809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.098839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.098936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.098964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.099057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.099084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.099172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.099198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 
00:34:08.936 [2024-07-23 10:54:57.099279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.099304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.099386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.099412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.099508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.099535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.099628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.099654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.099736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.099762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 
00:34:08.936 [2024-07-23 10:54:57.099847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.099876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.099973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.100002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.100096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.100123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.100210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.100238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.100340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.100368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 
00:34:08.936 [2024-07-23 10:54:57.100476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.100508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.100600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.100627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.100711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.100739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.100824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.100851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.100933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.100963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 
00:34:08.936 [2024-07-23 10:54:57.101046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.101073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.101152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.101178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.101279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.101305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.101403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.936 [2024-07-23 10:54:57.101429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.936 qpair failed and we were unable to recover it. 00:34:08.936 [2024-07-23 10:54:57.101510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.101536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 
00:34:08.937 [2024-07-23 10:54:57.101635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.101664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.101801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.101855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.101943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.101972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.102064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.102091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.102181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.102208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 
00:34:08.937 [2024-07-23 10:54:57.102294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.102321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.102409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.102435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.102563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.102591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.102690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.102716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.102810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.102837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 
00:34:08.937 [2024-07-23 10:54:57.102924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.102952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.103039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.103066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.103164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.103203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.103296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.103325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.103408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.103434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 
00:34:08.937 [2024-07-23 10:54:57.103547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.103574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.103676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.103703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.103791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.103817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.103900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.103927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.104017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.104044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 
00:34:08.937 [2024-07-23 10:54:57.104134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.104161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.104244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.104271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.104358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.104385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.104484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.104511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.104601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.104632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 
00:34:08.937 [2024-07-23 10:54:57.104723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.104750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.104841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.104867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.104963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.104993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.105124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.105179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.105280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.105308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 
00:34:08.937 [2024-07-23 10:54:57.105389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.105415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.105510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.105538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.937 qpair failed and we were unable to recover it. 00:34:08.937 [2024-07-23 10:54:57.105628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.937 [2024-07-23 10:54:57.105655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.105742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.105767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.105847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.105874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 
00:34:08.938 [2024-07-23 10:54:57.105953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.105979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.106069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.106094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.106198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.106224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.106306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.106332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.106416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.106442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 
00:34:08.938 [2024-07-23 10:54:57.106530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.106556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.106683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.106709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.106786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.106817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.106908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.106934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.107024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.107050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 
00:34:08.938 [2024-07-23 10:54:57.107138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.107168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.107253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.107279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.107362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.107389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.107476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.107510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.107600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.107628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 
00:34:08.938 [2024-07-23 10:54:57.107710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.107736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.107819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.107845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.107932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.107958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.108042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.108070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.108162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.108190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 
00:34:08.938 [2024-07-23 10:54:57.108277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.108307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.108406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.108432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.108529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.108558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.108657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.108683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.108766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.108792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 
00:34:08.938 [2024-07-23 10:54:57.108876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.108902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.108992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.109017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.109107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.109134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.109217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.109243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.109331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.109360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 
00:34:08.938 [2024-07-23 10:54:57.109453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.109491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.109586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.109613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.109699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.109726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.109806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.109832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 00:34:08.938 [2024-07-23 10:54:57.109910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.938 [2024-07-23 10:54:57.109941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.938 qpair failed and we were unable to recover it. 
00:34:08.938 [2024-07-23 10:54:57.110022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.110049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.110137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.110163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.110246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.110275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.110374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.110404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.110510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.110537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 
00:34:08.939 [2024-07-23 10:54:57.110622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.110648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.110733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.110760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.110847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.110872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.110965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.110992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.111078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.111104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 
00:34:08.939 [2024-07-23 10:54:57.111187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.111214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.111304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.111332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.111424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.111452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.111578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.111617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.111718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.111746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 
00:34:08.939 [2024-07-23 10:54:57.111834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.111862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.111970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.111999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.112099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.112126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.112221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.112247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.112343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.112369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 
00:34:08.939 [2024-07-23 10:54:57.112458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.112491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.112576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.112603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.112716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.112742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.112820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.112847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.112937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.112964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 
00:34:08.939 [2024-07-23 10:54:57.113048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.113075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.113184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.113213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.113317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.113345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.113433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.113462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.113577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.113609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 
00:34:08.939 [2024-07-23 10:54:57.113699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.113726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.113815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.113843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.113937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.113963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.114046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.114072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.114161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.114189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 
00:34:08.939 [2024-07-23 10:54:57.114279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.114306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.114406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.939 [2024-07-23 10:54:57.114447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.939 qpair failed and we were unable to recover it. 00:34:08.939 [2024-07-23 10:54:57.114587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.114629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.114759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.114787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.114873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.114908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 
00:34:08.940 [2024-07-23 10:54:57.115044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.115084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.115170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.115196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.115290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.115319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.115400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.115426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.115525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.115553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 
00:34:08.940 [2024-07-23 10:54:57.115645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.115673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.115762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.115791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.115890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.115917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.116010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.116040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.116134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.116160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 
00:34:08.940 [2024-07-23 10:54:57.116244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.116270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.116349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.116375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.116471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.116506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.116601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.116628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.116718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.116746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 
00:34:08.940 [2024-07-23 10:54:57.116835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.116862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.116954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.116980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.117071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.117099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.117181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.117207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.117289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.117315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 
00:34:08.940 [2024-07-23 10:54:57.117400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.117427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.117522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.117552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.117650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.117676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.117770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.117797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.117886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.117913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 
00:34:08.940 [2024-07-23 10:54:57.117997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.118023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.118104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.118135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.118216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.118242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.118327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.118354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.118437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.118463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 
00:34:08.940 [2024-07-23 10:54:57.118561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.118587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.118675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.118701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.118789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.118815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.118898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.118924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 00:34:08.940 [2024-07-23 10:54:57.119013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.940 [2024-07-23 10:54:57.119040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.940 qpair failed and we were unable to recover it. 
00:34:08.940 [2024-07-23 10:54:57.119120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.119146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.119248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.119274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.119356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.119381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.119470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.119511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.119606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.119632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 
00:34:08.941 [2024-07-23 10:54:57.119731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.119757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.119840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.119866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.119954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.119984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.120070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.120097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.120182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.120208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 
00:34:08.941 [2024-07-23 10:54:57.120293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.120319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.120406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.120432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.120520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.120547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.120631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.120657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.120738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.120764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 
00:34:08.941 [2024-07-23 10:54:57.120846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.120872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.120953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.120979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.121059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.121085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.121179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.121213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.121311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.121340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 
00:34:08.941 [2024-07-23 10:54:57.121434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.121460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.121553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.121579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.121667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.121693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.121783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.121809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.121891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.121919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 
00:34:08.941 [2024-07-23 10:54:57.122008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.122034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.122124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.122150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.122235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.122263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.122371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.122400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.122495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.122523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 
00:34:08.941 [2024-07-23 10:54:57.122611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.941 [2024-07-23 10:54:57.122639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.941 qpair failed and we were unable to recover it. 00:34:08.941 [2024-07-23 10:54:57.122725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.122752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.122849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.122876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.122955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.122981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.123074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.123101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 
00:34:08.942 [2024-07-23 10:54:57.123185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.123212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.123293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.123320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.123422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.123451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.123561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.123588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.123673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.123699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 
00:34:08.942 [2024-07-23 10:54:57.123779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.123806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.123894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.123924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.124006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.124032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.124136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.124163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.124257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.124284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 
00:34:08.942 [2024-07-23 10:54:57.124373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.124400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.124500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.124530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.124631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.124659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.124750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.124776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.124866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.124893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 
00:34:08.942 [2024-07-23 10:54:57.124983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.125009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.125093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.125119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.125200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.125226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.125311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.125337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.125462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.125495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 
00:34:08.942 [2024-07-23 10:54:57.125595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.125624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.125716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.125743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.125834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.125861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.125960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.126005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.126120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.126150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 
00:34:08.942 [2024-07-23 10:54:57.126247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.126273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.126355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.126381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.126465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.126500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.126591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.126617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.126706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.126732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 
00:34:08.942 [2024-07-23 10:54:57.126816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.126842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.126928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.126954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.127036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.127062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.942 [2024-07-23 10:54:57.127143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.942 [2024-07-23 10:54:57.127169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.942 qpair failed and we were unable to recover it. 00:34:08.943 [2024-07-23 10:54:57.127254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.943 [2024-07-23 10:54:57.127280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.943 qpair failed and we were unable to recover it. 
00:34:08.945 [... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair repeats continuously through 10:54:57.139988, each attempt ending "qpair failed and we were unable to recover it.", for tqpair handles 0x1f80990, 0x7fb6e8000b90, and 0x7fb6e0000b90, all against addr=10.0.0.2, port=4420 ...]
00:34:08.945 [2024-07-23 10:54:57.140086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.945 [2024-07-23 10:54:57.140116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.945 qpair failed and we were unable to recover it. 00:34:08.945 [2024-07-23 10:54:57.140203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.140230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.140318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.140345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.140437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.140463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.140567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.140596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 
00:34:08.946 [2024-07-23 10:54:57.140688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.140715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.140799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.140826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.140911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.140938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.141021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.141047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.141129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.141155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 
00:34:08.946 [2024-07-23 10:54:57.141244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.141273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.141365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.141392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.141495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.141524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.141620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.141648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.141738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.141765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 
00:34:08.946 [2024-07-23 10:54:57.141856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.141882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.142005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.142031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.142112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.142138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.142223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.142251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.142368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.142394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 
00:34:08.946 [2024-07-23 10:54:57.142474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.142513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.142604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.142631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.142718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.142744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.142829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.142856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.142936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.142964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 
00:34:08.946 [2024-07-23 10:54:57.143082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.143123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.143218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.143252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.143343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.143371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.143451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.143477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.143578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.143604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 
00:34:08.946 [2024-07-23 10:54:57.143695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.143722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.946 [2024-07-23 10:54:57.143806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.946 [2024-07-23 10:54:57.143833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.946 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.143915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.143941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.144020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.144046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.144130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.144158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 
00:34:08.947 [2024-07-23 10:54:57.144255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.144283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.144368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.144395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.144490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.144517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.144616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.144644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.144737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.144766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 
00:34:08.947 [2024-07-23 10:54:57.144858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.144885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.144972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.144998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.145083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.145111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.145197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.145224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.145303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.145329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 
00:34:08.947 [2024-07-23 10:54:57.145411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.145437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.145524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.145551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.145636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.145663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.145745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.145771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.145858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.145884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 
00:34:08.947 [2024-07-23 10:54:57.145982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.146009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.146090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.146116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.146204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.146231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.146314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.146344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.146433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.146460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 
00:34:08.947 [2024-07-23 10:54:57.146555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.146582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.146678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.146704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.146787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.146813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.146910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.146937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.147026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.147054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 
00:34:08.947 [2024-07-23 10:54:57.147137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.147164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.147243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.147269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.147359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.147385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.147473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.147507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.147629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.147655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 
00:34:08.947 [2024-07-23 10:54:57.147741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.947 [2024-07-23 10:54:57.147768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.947 qpair failed and we were unable to recover it. 00:34:08.947 [2024-07-23 10:54:57.147883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.147909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.147999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.148025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.148103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.148129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.148227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.148253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 
00:34:08.948 [2024-07-23 10:54:57.148346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.148373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.148455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.148487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.148578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.148603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.148687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.148713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.148822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.148848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 
00:34:08.948 [2024-07-23 10:54:57.148938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.148964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.149044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.149070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.149158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.149185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.149269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.149295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.149428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.149468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 
00:34:08.948 [2024-07-23 10:54:57.149609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.149651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.149749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.149777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.149868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.149893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.150017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.150044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 00:34:08.948 [2024-07-23 10:54:57.150126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.948 [2024-07-23 10:54:57.150152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.948 qpair failed and we were unable to recover it. 
00:34:08.948 [2024-07-23 10:54:57.150237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.150263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.150343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.150369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.150451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.150477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.150583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.150610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.150696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.150722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.150813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.150841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.150929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.150957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.151051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.151077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.151170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.151246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.151334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.151362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.151478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.151516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.151615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.151641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.151733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.151760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.151848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.151875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.151973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.152002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.152091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.152120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.152243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.152270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.948 [2024-07-23 10:54:57.152355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.948 [2024-07-23 10:54:57.152383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.948 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.152468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.152501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.152593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.152620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.152742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.152770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.152898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.152927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.153019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.153046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.153124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.153150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.153247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.153272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.153387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.153411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.153500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.153527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.153619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.153647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.153733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.153761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.153852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.153878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.153961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.153987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.154066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.154092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.154189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.154214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.154319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.154345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.154485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.154511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.154612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.154645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.154744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.154770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.154850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.154876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.154964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.154994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.155085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.155111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.155201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.155228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.155327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.155353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.155446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.155472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.155590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.155617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.155699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.155726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.155808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.155833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.155922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.155949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.156033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.156059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.156145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.156172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.156264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.156291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.156375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.156401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.949 qpair failed and we were unable to recover it.
00:34:08.949 [2024-07-23 10:54:57.156490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.949 [2024-07-23 10:54:57.156517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.156597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.156624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.156718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.156747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.156833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.156862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.156949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.156976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.157067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.157095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.157177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.157204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.157291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.157317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.157407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.157434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.157521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.157548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.157629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.157656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.157747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.157777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.157859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.157886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.157969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.157996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.158082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.158108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.158238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.158278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.158367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.158394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.158476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.158511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.158625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.158651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.158735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.158759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.158843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.158869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.158948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.158974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.159077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.159105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.159198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.159225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.159315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.159342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.159452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.159478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.159582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.159611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.159697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.159724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.159808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.159836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.159922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.159948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.160032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.160058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.160140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.160166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.160256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.160283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.160363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.160389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.160469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.160510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.160594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.160619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.950 [2024-07-23 10:54:57.160701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.950 [2024-07-23 10:54:57.160727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.950 qpair failed and we were unable to recover it.
00:34:08.951 [2024-07-23 10:54:57.160817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.951 [2024-07-23 10:54:57.160844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.951 qpair failed and we were unable to recover it.
00:34:08.951 [2024-07-23 10:54:57.160929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.951 [2024-07-23 10:54:57.160958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.951 qpair failed and we were unable to recover it.
00:34:08.951 [2024-07-23 10:54:57.161043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.951 [2024-07-23 10:54:57.161069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.951 qpair failed and we were unable to recover it.
00:34:08.951 [2024-07-23 10:54:57.161155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.951 [2024-07-23 10:54:57.161181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.951 qpair failed and we were unable to recover it.
00:34:08.951 [2024-07-23 10:54:57.161267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.951 [2024-07-23 10:54:57.161293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.951 qpair failed and we were unable to recover it.
00:34:08.951 [2024-07-23 10:54:57.161374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.951 [2024-07-23 10:54:57.161401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.951 qpair failed and we were unable to recover it.
00:34:08.951 [2024-07-23 10:54:57.161489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.951 [2024-07-23 10:54:57.161516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.951 qpair failed and we were unable to recover it.
00:34:08.951 [2024-07-23 10:54:57.161603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.161630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.161720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.161747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.161842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.161868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.161955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.161981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.162070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.162097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 
00:34:08.951 [2024-07-23 10:54:57.162181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.162209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.162302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.162329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.162417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.162446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.162546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.162575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.162664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.162693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 
00:34:08.951 [2024-07-23 10:54:57.162784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.162811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.162899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.162927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.163006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.163032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.163117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.163144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.163229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.163254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 
00:34:08.951 [2024-07-23 10:54:57.163336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.163364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.163459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.163495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.163587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.163614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.163699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.163725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.163817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.163843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 
00:34:08.951 [2024-07-23 10:54:57.163932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.163959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.164098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.951 [2024-07-23 10:54:57.164125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.951 qpair failed and we were unable to recover it. 00:34:08.951 [2024-07-23 10:54:57.164208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.164234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.164319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.164345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.164434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.164461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 
00:34:08.952 [2024-07-23 10:54:57.164561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.164587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.164677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.164703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.164794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.164821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.164909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.164935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.165027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.165053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 
00:34:08.952 [2024-07-23 10:54:57.165146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.165174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.165258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.165287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.165376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.165402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.165490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.165518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.165612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.165644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 
00:34:08.952 [2024-07-23 10:54:57.165728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.165754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.165840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.165868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.165954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.165980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.166067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.166094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.166182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.166209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 
00:34:08.952 [2024-07-23 10:54:57.166290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.166318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.166404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.166430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.166513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.166540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.166626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.166652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.166736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.166762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 
00:34:08.952 [2024-07-23 10:54:57.166844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.166870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.166957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.166983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.167067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.167093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.167187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.167213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.167306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.167336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 
00:34:08.952 [2024-07-23 10:54:57.167432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.167460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.167549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.167576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.167662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.167689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.167782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.167810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.167901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.167930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 
00:34:08.952 [2024-07-23 10:54:57.168027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.168053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.168142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.168169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.168264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.952 [2024-07-23 10:54:57.168290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.952 qpair failed and we were unable to recover it. 00:34:08.952 [2024-07-23 10:54:57.168372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.168400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.168487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.168514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 
00:34:08.953 [2024-07-23 10:54:57.168598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.168624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.168758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.168789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.168868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.168894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.168978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.169003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.169096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.169124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 
00:34:08.953 [2024-07-23 10:54:57.169212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.169239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.169331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.169358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.169445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.169473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.169587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.169615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.169706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.169734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 
00:34:08.953 [2024-07-23 10:54:57.169822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.169850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.169935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.169961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.170051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.170077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.170157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.170183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.170266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.170292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 
00:34:08.953 [2024-07-23 10:54:57.170419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.170445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.170543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.170570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.170661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.170686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.170797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.170822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.170939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.170965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 
00:34:08.953 [2024-07-23 10:54:57.171048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.171074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.171154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.171180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.171304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.171330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.171422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.171449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.171547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.171576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 
00:34:08.953 [2024-07-23 10:54:57.171675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.171704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.171793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.171820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.171907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.171935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.172027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.172059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.172147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.172174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 
00:34:08.953 [2024-07-23 10:54:57.172261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.172287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.172382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.172408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.172504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.172532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.172622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.172649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 00:34:08.953 [2024-07-23 10:54:57.172735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.953 [2024-07-23 10:54:57.172761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.953 qpair failed and we were unable to recover it. 
00:34:08.954 [2024-07-23 10:54:57.172844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.172871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.172957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.172986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.173071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.173098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.173182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.173208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.173287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.173313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.173404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.173431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.173512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.173539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.173630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.173656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.173747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.173776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.173864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.173891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.173979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.174006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.174096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.174122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.174216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.174244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.174337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.174364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.174452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.174487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.174572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.174599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.174689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.174719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.174804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.174831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.174916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.174943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.175026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.175052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.175150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.175179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.175271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.175298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.175380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.175407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.175496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.175524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.175611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.175639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.175731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.175760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.175847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.175872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.175960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.175985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.176073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.176099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.176192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.176222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.176309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.176335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.176415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.176441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.176573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.176602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.176727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.176755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.954 [2024-07-23 10:54:57.176882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.954 [2024-07-23 10:54:57.176910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.954 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.177001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.177028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.177144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.177171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.177265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.177292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.177412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.177440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.177531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.177558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.177642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.177668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.177759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.177786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.177871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.177898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.177985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.178015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.178105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.178133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.178217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.178244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.178325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.178352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.178441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.178467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.178572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.178599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.178689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.178720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.178810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.178837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.178933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.178959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.179042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.179069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.179157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.179187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.179272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.179299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.179384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.179410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.179507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.179533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.179651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.179677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.179764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.179791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.179869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.179895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.179985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.180018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.180116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.180157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.180252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.180280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.180404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.180430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.180524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.180551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.180648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.180675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.180763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.180791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.180877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.180904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.180987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.181014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.181104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.181131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.181222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.955 [2024-07-23 10:54:57.181250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.955 qpair failed and we were unable to recover it.
00:34:08.955 [2024-07-23 10:54:57.181334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.181361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.181454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.181487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.181613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.181642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.181735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.181762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.181855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.181881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.181977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.182005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.182090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.182116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.182205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.182232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.182330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.182357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.182458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.182512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.182636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.182678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.182777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.182805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.182898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.182927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.183013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.183039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.183126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.183152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.183239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.183266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.183365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.183392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.183490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.183518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.183609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.183635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.183724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.183751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.183873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.183901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.184017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.184044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.184154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.184181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.184278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.956 [2024-07-23 10:54:57.184305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.956 qpair failed and we were unable to recover it.
00:34:08.956 [2024-07-23 10:54:57.184400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.956 [2024-07-23 10:54:57.184429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.956 qpair failed and we were unable to recover it. 00:34:08.956 [2024-07-23 10:54:57.184550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.956 [2024-07-23 10:54:57.184578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.956 qpair failed and we were unable to recover it. 00:34:08.956 [2024-07-23 10:54:57.184670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.956 [2024-07-23 10:54:57.184697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.956 qpair failed and we were unable to recover it. 00:34:08.956 [2024-07-23 10:54:57.184784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.956 [2024-07-23 10:54:57.184809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.956 qpair failed and we were unable to recover it. 00:34:08.956 [2024-07-23 10:54:57.184901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.956 [2024-07-23 10:54:57.184927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.956 qpair failed and we were unable to recover it. 
00:34:08.956 [2024-07-23 10:54:57.185043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.185069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.185161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.185187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.185268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.185293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.185389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.185417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.185516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.185545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 
00:34:08.957 [2024-07-23 10:54:57.185636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.185664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.185750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.185776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.185864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.185890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.186003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.186029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.186147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.186173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 
00:34:08.957 [2024-07-23 10:54:57.186258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.186285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.186374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.186401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.186519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.186547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.186638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.186664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.186749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.186776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 
00:34:08.957 [2024-07-23 10:54:57.186862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.186888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.186968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.186993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.187104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.187131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.187225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.187251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.187334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.187360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 
00:34:08.957 [2024-07-23 10:54:57.187474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.187509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.187598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.187624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.187727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.187753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.187857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.187883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.187967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.187993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 
00:34:08.957 [2024-07-23 10:54:57.188086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.188113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.188205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.188233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.188314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.188340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.188435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.188461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.188561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.188587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 
00:34:08.957 [2024-07-23 10:54:57.188671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.188697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.188777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.188803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.188886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.188912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.189006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.189035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.189124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.189151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 
00:34:08.957 [2024-07-23 10:54:57.189235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.957 [2024-07-23 10:54:57.189261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.957 qpair failed and we were unable to recover it. 00:34:08.957 [2024-07-23 10:54:57.189351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.189377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.189472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.189513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.189602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.189629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.189715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.189741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 
00:34:08.958 [2024-07-23 10:54:57.189833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.189861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.189958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.189984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.190067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.190096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.190191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.190218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.190309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.190335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 
00:34:08.958 [2024-07-23 10:54:57.190416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.190442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.190537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.190564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.190657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.190683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.190760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.190786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.190870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.190896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 
00:34:08.958 [2024-07-23 10:54:57.190994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.191020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.191100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.191126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.191205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.191231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.191316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.191341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.191426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.191452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 
00:34:08.958 [2024-07-23 10:54:57.191553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.191580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.191678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.191704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.191792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.191818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.191911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.191938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.192027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.192053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 
00:34:08.958 [2024-07-23 10:54:57.192138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.192165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.192260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.192290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.192376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.192404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.192500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.192530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.192621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.192649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 
00:34:08.958 [2024-07-23 10:54:57.192775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.192802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.192898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.192924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.193010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.193036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.193129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.193157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.193241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.193267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 
00:34:08.958 [2024-07-23 10:54:57.193356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.193385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.193493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.193521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.193640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.193667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.193752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.958 [2024-07-23 10:54:57.193778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.958 qpair failed and we were unable to recover it. 00:34:08.958 [2024-07-23 10:54:57.193919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.959 [2024-07-23 10:54:57.193969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.959 qpair failed and we were unable to recover it. 
00:34:08.959 [2024-07-23 10:54:57.194056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.959 [2024-07-23 10:54:57.194083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.959 qpair failed and we were unable to recover it. 00:34:08.959 [2024-07-23 10:54:57.194169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.959 [2024-07-23 10:54:57.194196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.959 qpair failed and we were unable to recover it. 00:34:08.959 [2024-07-23 10:54:57.194307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.959 [2024-07-23 10:54:57.194334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.959 qpair failed and we were unable to recover it. 00:34:08.959 [2024-07-23 10:54:57.194416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.959 [2024-07-23 10:54:57.194443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.959 qpair failed and we were unable to recover it. 00:34:08.959 [2024-07-23 10:54:57.194561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.959 [2024-07-23 10:54:57.194624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.959 qpair failed and we were unable to recover it. 
00:34:08.959 [2024-07-23 10:54:57.194713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.194740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.194822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.194848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.194944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.194973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.195061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.195089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.195187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.195213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.195293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.195319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.195402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.195429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.195518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.195548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.195635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.195662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.195751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.195777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.195867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.195895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.195983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.196009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.196095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.196122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.196214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.196242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.196330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.196357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.196446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.196472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.196598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.196625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.196735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.196793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.196882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.196909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.196992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.197019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.197110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.197136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.197223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.197252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.197341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.197369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.197467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.197514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.197640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.197682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.197778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.197808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.197899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.197926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.198017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.198044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.198132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.198165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.959 qpair failed and we were unable to recover it.
00:34:08.959 [2024-07-23 10:54:57.198258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.959 [2024-07-23 10:54:57.198286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.198376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.198403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.198519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.198551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.198646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.198673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.198757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.198783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.198864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.198890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.198974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.198999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.199077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.199103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.199193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.199220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.199310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.199339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.199428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.199455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.199572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.199600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.199682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.199708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.199804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.199831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.199914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.199943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.200035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.200063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.200146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.200172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.200253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.200279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.200370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.200397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.200493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.200519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.200600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.200626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.200709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.200735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.200826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.200852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.200941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.200967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.201054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.201080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.201163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.201189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.201274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.201305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.201394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.201420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.201509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.201535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.201618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.201644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.201734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.201760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.201849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.201876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.201960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.201986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.202071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.202097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.202176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.202203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.202281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.202307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.960 [2024-07-23 10:54:57.202400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.960 [2024-07-23 10:54:57.202429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.960 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.202525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.202553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.202641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.202668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.202755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.202781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.202874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.202902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.202991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.203017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.203105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.203131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.203216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.203242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.203330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.203356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.203440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.203467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.203563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.203591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.203682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.203712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.203815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.203844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.203935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.203962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.204045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.204071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.204162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.204190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.204281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.204307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.204392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.204425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.204512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.204540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.204624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.204649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.204732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.204758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.204843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.204870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.204950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.204977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.205062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.205089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.205174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.205200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.205282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.205307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.205395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.205422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.205507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.205534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.205622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.205649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.205738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.205764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.961 [2024-07-23 10:54:57.205849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.961 [2024-07-23 10:54:57.205878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.961 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.205970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.205996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.206082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.206109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.206194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.206221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.206300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.206326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.206412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.206441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.206533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.206560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.206645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.206672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.206762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.206788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.206872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.206898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.206980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.207006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.207096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.207123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.207214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.207243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.207339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.207364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.207452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.207478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.207571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.207597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.207688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.207715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.207811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.207840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.207928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.207955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.208042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.962 [2024-07-23 10:54:57.208070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.962 qpair failed and we were unable to recover it.
00:34:08.962 [2024-07-23 10:54:57.208158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.208185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.208292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.208319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.208412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.208438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.208534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.208563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.208649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.208675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 
00:34:08.962 [2024-07-23 10:54:57.208760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.208786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.208879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.208906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.209006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.209039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.209124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.209151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.209240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.209269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 
00:34:08.962 [2024-07-23 10:54:57.209353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.209381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.209472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.209506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.209593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.209619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.209701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.209728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.209808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.209834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 
00:34:08.962 [2024-07-23 10:54:57.209923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.209949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.962 [2024-07-23 10:54:57.210049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.962 [2024-07-23 10:54:57.210090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.962 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.210188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.210215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.210301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.210327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.210409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.210435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 
00:34:08.963 [2024-07-23 10:54:57.210529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.210555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.210652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.210680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.210764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.210790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.210873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.210899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.210976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.211002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 
00:34:08.963 [2024-07-23 10:54:57.211094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.211121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.211204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.211230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.211311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.211336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.211420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.211446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.211545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.211571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 
00:34:08.963 [2024-07-23 10:54:57.211654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.211683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.211768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.211795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.211888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.211915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.211998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.212024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.212115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.212149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 
00:34:08.963 [2024-07-23 10:54:57.212235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.212261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.212346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.212372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.212460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.212492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.212578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.212606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.212699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.212726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 
00:34:08.963 [2024-07-23 10:54:57.212819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.212847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.212940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.212968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.213049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.213076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.213154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.213181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.213258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.213284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 
00:34:08.963 [2024-07-23 10:54:57.213364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.213391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.213475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.213508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.213591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.213618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.213722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.213748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.213840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.213867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 
00:34:08.963 [2024-07-23 10:54:57.213955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.213982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.214073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.214099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.214180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.214207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.214289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.963 [2024-07-23 10:54:57.214315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.963 qpair failed and we were unable to recover it. 00:34:08.963 [2024-07-23 10:54:57.214407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.214434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 
00:34:08.964 [2024-07-23 10:54:57.214532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.214560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.214652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.214679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.214769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.214795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.214879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.214905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.214986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.215012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 
00:34:08.964 [2024-07-23 10:54:57.215107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.215148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.215235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.215265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.215351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.215377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.215465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.215498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.215601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.215627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 
00:34:08.964 [2024-07-23 10:54:57.215717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.215745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.215835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.215863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.215946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.215972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.216064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.216092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.216175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.216202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 
00:34:08.964 [2024-07-23 10:54:57.216284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.216310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.216408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.216434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.216536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.216565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.216650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.216677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.216770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.216801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 
00:34:08.964 [2024-07-23 10:54:57.216887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.216913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.216996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.217022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.217100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.217126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.217211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.217238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 00:34:08.964 [2024-07-23 10:54:57.217343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.964 [2024-07-23 10:54:57.217382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.964 qpair failed and we were unable to recover it. 
00:34:08.968 [2024-07-23 10:54:57.230473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.230507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.230597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.230622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.230705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.230730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.230821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.230846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.230937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.230962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 
00:34:08.968 [2024-07-23 10:54:57.231044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.231077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.231176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.231202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.231289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.231314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.231433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.231460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.231553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.231581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 
00:34:08.968 [2024-07-23 10:54:57.231700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.231726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.231807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.231832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.231914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.231939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.232027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.232052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.232135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.232160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 
00:34:08.968 [2024-07-23 10:54:57.232274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.232300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.232386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.232411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.232553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.968 [2024-07-23 10:54:57.232594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.968 qpair failed and we were unable to recover it. 00:34:08.968 [2024-07-23 10:54:57.232708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.232736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.232834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.232861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 
00:34:08.969 [2024-07-23 10:54:57.232947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.232973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.233059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.233086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.233174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.233202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.233289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.233316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.233406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.233437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 
00:34:08.969 [2024-07-23 10:54:57.233540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.233569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.233655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.233681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.233761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.233787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.233880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.233905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.233997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.234023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 
00:34:08.969 [2024-07-23 10:54:57.234161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.234187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.234267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.234293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.234380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.234409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.234509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.234539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.234628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.234655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 
00:34:08.969 [2024-07-23 10:54:57.234740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.234768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.234854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.234882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.234966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.234992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.235080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.235106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.235198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.235224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 
00:34:08.969 [2024-07-23 10:54:57.235304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.235330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.235450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.235477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.235638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.235691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.235773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.235799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.235938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.235981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 
00:34:08.969 [2024-07-23 10:54:57.236069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.236101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.236184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.236210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.236303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.236344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.236443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.236472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.236574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.236601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 
00:34:08.969 [2024-07-23 10:54:57.236691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.236719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.236829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.236855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.236946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.236974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.237064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.969 [2024-07-23 10:54:57.237091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.969 qpair failed and we were unable to recover it. 00:34:08.969 [2024-07-23 10:54:57.237177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.237204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 
00:34:08.970 [2024-07-23 10:54:57.237289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.237317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.237403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.237431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.237601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.237644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.237744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.237772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.237866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.237892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 
00:34:08.970 [2024-07-23 10:54:57.237990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.238016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.238101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.238127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.238210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.238236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.238321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.238347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.238431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.238457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 
00:34:08.970 [2024-07-23 10:54:57.238560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.238590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.238678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.238707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.238791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.238818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.238932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.238959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.239054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.239081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 
00:34:08.970 [2024-07-23 10:54:57.239166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.239192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.239274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.239300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.239387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.239418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.239499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.239527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.239618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.239646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 
00:34:08.970 [2024-07-23 10:54:57.239736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.239766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.239853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.239881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.239970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.240000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.240136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.240163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.240251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.240276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 
00:34:08.970 [2024-07-23 10:54:57.240359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.240385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.240473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.240508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.240595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.240621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.240703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.240729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.240816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.240842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 
00:34:08.970 [2024-07-23 10:54:57.240957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.240983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.241122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.241148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.241265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.241290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.241403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.241429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 00:34:08.970 [2024-07-23 10:54:57.241590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.970 [2024-07-23 10:54:57.241644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.970 qpair failed and we were unable to recover it. 
00:34:08.970 [2024-07-23 10:54:57.241725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.241751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.241837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.241863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.241950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.241977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.242056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.242082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.242167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.242193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 
00:34:08.971 [2024-07-23 10:54:57.242289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.242315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.242397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.242422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.242513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.242540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.242619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.242645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.242731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.242761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 
00:34:08.971 [2024-07-23 10:54:57.242844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.242870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.242959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.242985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.243064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.243090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.243204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.243230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.243315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.243344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 
00:34:08.971 [2024-07-23 10:54:57.243461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.243495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.243586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.243612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.243700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.243726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.243806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.243832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.243919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.243946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 
00:34:08.971 [2024-07-23 10:54:57.244032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.244058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.244139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.244166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.244249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.244276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.244378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.244406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.244500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.244527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 
00:34:08.971 [2024-07-23 10:54:57.244615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.244641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.244727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.244753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.244849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.244875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.245008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.245033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.971 qpair failed and we were unable to recover it. 00:34:08.971 [2024-07-23 10:54:57.245118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.971 [2024-07-23 10:54:57.245143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 
00:34:08.972 [2024-07-23 10:54:57.245260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.245289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.245385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.245414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.245511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.245539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.245625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.245652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.245733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.245760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 
00:34:08.972 [2024-07-23 10:54:57.245846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.245875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.245961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.245992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.246074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.246100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.246190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.246217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.246296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.246323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 
00:34:08.972 [2024-07-23 10:54:57.246419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.246445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.246537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.246565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.246673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.246704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.246810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.246836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.246925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.246952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 
00:34:08.972 [2024-07-23 10:54:57.247036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.247062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.247147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.247172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.247261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.247290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.247377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.247406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.247494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.247520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 
00:34:08.972 [2024-07-23 10:54:57.247659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.247702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.247792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.247819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.247902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.247928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.248017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.248043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.248138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.248167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 
00:34:08.972 [2024-07-23 10:54:57.248249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.248276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.248356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.248385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.248471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.248506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.248598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.248625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.248704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.248731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 
00:34:08.972 [2024-07-23 10:54:57.248818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.248845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.248931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.248957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.249048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.249076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.249171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.249200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 00:34:08.972 [2024-07-23 10:54:57.249284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.972 [2024-07-23 10:54:57.249310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.972 qpair failed and we were unable to recover it. 
00:34:08.972 [2024-07-23 10:54:57.249397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.249425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.249557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.249585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.249681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.249708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.249798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.249825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.249909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.249935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 
00:34:08.973 [2024-07-23 10:54:57.250015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.250041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.250129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.250155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.250245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.250272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.250364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.250392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.250478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.250519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 
00:34:08.973 [2024-07-23 10:54:57.250639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.250666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.250753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.250785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.250919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.250964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.251051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.251077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.251157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.251183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 
00:34:08.973 [2024-07-23 10:54:57.251270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.251296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.251382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.251409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.251494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.251523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.251606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.251633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.251753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.251782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 
00:34:08.973 [2024-07-23 10:54:57.251902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.251928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.252014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.252041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.252138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.252164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.252253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.252281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.252371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.252399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 
00:34:08.973 [2024-07-23 10:54:57.252491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.252520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.252617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.252645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.252730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.252756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.252857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.252889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.252975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.253003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 
00:34:08.973 [2024-07-23 10:54:57.253089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.253115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.253208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.253234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.253317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.253343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.253451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.253497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.253593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.253622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 
00:34:08.973 [2024-07-23 10:54:57.253769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.253822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.253952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.253994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.254085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.254112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.973 [2024-07-23 10:54:57.254237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.973 [2024-07-23 10:54:57.254278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:08.973 qpair failed and we were unable to recover it. 00:34:08.974 [2024-07-23 10:54:57.254387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.974 [2024-07-23 10:54:57.254416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.974 qpair failed and we were unable to recover it. 
00:34:08.976 [2024-07-23 10:54:57.265706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.976 [2024-07-23 10:54:57.265732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.976 qpair failed and we were unable to recover it. 00:34:08.976 [2024-07-23 10:54:57.265814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.976 [2024-07-23 10:54:57.265841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.976 qpair failed and we were unable to recover it. 00:34:08.976 [2024-07-23 10:54:57.265935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.976 [2024-07-23 10:54:57.265963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.976 qpair failed and we were unable to recover it. 00:34:08.976 [2024-07-23 10:54:57.266052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.976 [2024-07-23 10:54:57.266078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.976 qpair failed and we were unable to recover it. 00:34:08.976 [2024-07-23 10:54:57.266168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.976 [2024-07-23 10:54:57.266196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.976 qpair failed and we were unable to recover it. 
00:34:08.976 [2024-07-23 10:54:57.266292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.976 [2024-07-23 10:54:57.266318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.976 qpair failed and we were unable to recover it. 00:34:08.976 [2024-07-23 10:54:57.266398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.976 [2024-07-23 10:54:57.266424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.976 qpair failed and we were unable to recover it. 00:34:08.976 [2024-07-23 10:54:57.266507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.976 [2024-07-23 10:54:57.266534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.976 qpair failed and we were unable to recover it. 00:34:08.976 [2024-07-23 10:54:57.266622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.976 [2024-07-23 10:54:57.266647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.976 qpair failed and we were unable to recover it. 00:34:08.976 [2024-07-23 10:54:57.266725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.976 [2024-07-23 10:54:57.266751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.976 qpair failed and we were unable to recover it. 
00:34:08.976 [2024-07-23 10:54:57.266838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.266864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.266943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.266968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.267048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.267073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.267165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.267194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.267284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.267310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 
00:34:08.977 [2024-07-23 10:54:57.267407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.267433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.267549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.267576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.267675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.267700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.267785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.267812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.267898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.267924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 
00:34:08.977 [2024-07-23 10:54:57.268028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.268054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.268146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.268172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.268275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.268303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.268387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.268413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.268517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.268548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 
00:34:08.977 [2024-07-23 10:54:57.268642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.268670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.268764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.268791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.268878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.268905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.268989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.269018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.269109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.269136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 
00:34:08.977 [2024-07-23 10:54:57.269223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.269252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.269334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.269361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.269451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.269478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.269573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.269600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.269702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.269730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 
00:34:08.977 [2024-07-23 10:54:57.269819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.269845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.269948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.269975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.270061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.270089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.270181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.270207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.977 qpair failed and we were unable to recover it. 00:34:08.977 [2024-07-23 10:54:57.270301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.977 [2024-07-23 10:54:57.270330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 
00:34:08.978 [2024-07-23 10:54:57.270410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.270436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.270521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.270548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.270635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.270665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.270764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.270790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.270871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.270896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 
00:34:08.978 [2024-07-23 10:54:57.270977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.271003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.271093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.271119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.271204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.271229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.271310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.271336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.271422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.271448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 
00:34:08.978 [2024-07-23 10:54:57.271559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.271585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.271682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.271708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.271791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.271817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.271914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.271941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.272030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.272059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 
00:34:08.978 [2024-07-23 10:54:57.272149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.272175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.272268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.272294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.272387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.272415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.272516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.272543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.272630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.272658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 
00:34:08.978 [2024-07-23 10:54:57.272748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.272774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.272865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.272892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.272976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.273002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.273095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.273122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.273216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.273245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 
00:34:08.978 [2024-07-23 10:54:57.273332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.978 [2024-07-23 10:54:57.273360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.978 qpair failed and we were unable to recover it. 00:34:08.978 [2024-07-23 10:54:57.273510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.273559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.273647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.273673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.273757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.273782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.273872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.273904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 
00:34:08.979 [2024-07-23 10:54:57.274058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.274104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.274194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.274219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.274301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.274327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.274430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.274456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.274553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.274581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 
00:34:08.979 [2024-07-23 10:54:57.274663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.274690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.274775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.274800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.274891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.274917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.275011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.275038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.275126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.275152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 
00:34:08.979 [2024-07-23 10:54:57.275303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.275347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.275438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.275464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.275573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.275602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.275712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.275738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.275823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.275849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 
00:34:08.979 [2024-07-23 10:54:57.275934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.275961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.276049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.276076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.276171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.276199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.276296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.276324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.276413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.276439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 
00:34:08.979 [2024-07-23 10:54:57.276537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.276567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.276658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.276685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.276793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.979 [2024-07-23 10:54:57.276820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.979 qpair failed and we were unable to recover it. 00:34:08.979 [2024-07-23 10:54:57.276941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.276985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.277077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.277106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 
00:34:08.980 [2024-07-23 10:54:57.277194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.277220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.277317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.277345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.277439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.277465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.277556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.277583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.277675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.277701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 
00:34:08.980 [2024-07-23 10:54:57.277789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.277818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.277915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.277942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.278030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.278058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.278148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.278176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.278266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.278294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 
00:34:08.980 [2024-07-23 10:54:57.278389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.278417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.278509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.278537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.278622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.278648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.278729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.278755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.278840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.278868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 
00:34:08.980 [2024-07-23 10:54:57.278961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.278988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.279086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.279112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.279195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.279221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.279321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.279347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.279433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.279459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 
00:34:08.980 [2024-07-23 10:54:57.279558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.279586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.279678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.279704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.279787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.279813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.279902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.279927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 00:34:08.980 [2024-07-23 10:54:57.280016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.980 [2024-07-23 10:54:57.280042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.980 qpair failed and we were unable to recover it. 
00:34:08.980 [2024-07-23 10:54:57.280131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.280158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.280249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.280278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.280360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.280387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.280476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.280512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.280600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.280626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 
00:34:08.981 [2024-07-23 10:54:57.280718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.280745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.280838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.280864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.280944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.280970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.281055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.281082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.281171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.281198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 
00:34:08.981 [2024-07-23 10:54:57.281290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.281318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.281403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.281429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.281523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.281549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.281628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.281654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.281739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.281764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 
00:34:08.981 [2024-07-23 10:54:57.281855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.281885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.281975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.282002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.282090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.282116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.282214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.282242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.282327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.282354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 
00:34:08.981 [2024-07-23 10:54:57.282440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.282466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.282560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.282589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.282675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.282703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.282789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.282815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.282899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.282925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 
00:34:08.981 [2024-07-23 10:54:57.283017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.283043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.283131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.981 [2024-07-23 10:54:57.283158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.981 qpair failed and we were unable to recover it. 00:34:08.981 [2024-07-23 10:54:57.283247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.283273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.283405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.283431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.283519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.283546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 
00:34:08.982 [2024-07-23 10:54:57.283638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.283663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.283752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.283778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.283928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.283973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.284063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.284092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.284196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.284243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 
00:34:08.982 [2024-07-23 10:54:57.284337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.284364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.284451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.284478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.284571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.284598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.284675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.284701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.284780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.284812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 
00:34:08.982 [2024-07-23 10:54:57.284902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.284929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.285012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.285038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.285128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.285157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.285253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.285290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.982 [2024-07-23 10:54:57.285381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.285409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 
00:34:08.982 [2024-07-23 10:54:57.285507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.982 [2024-07-23 10:54:57.285535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.982 qpair failed and we were unable to recover it. 00:34:08.983 [2024-07-23 10:54:57.285634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.285660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 00:34:08.983 [2024-07-23 10:54:57.285747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.285773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 00:34:08.983 [2024-07-23 10:54:57.285862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.285889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 00:34:08.983 [2024-07-23 10:54:57.285978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.286006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 
00:34:08.983 [2024-07-23 10:54:57.286094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.286120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 00:34:08.983 [2024-07-23 10:54:57.286205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.286234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 00:34:08.983 [2024-07-23 10:54:57.286324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.286352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 00:34:08.983 [2024-07-23 10:54:57.286444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.286472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 00:34:08.983 [2024-07-23 10:54:57.286571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.286598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 
00:34:08.983 [2024-07-23 10:54:57.286692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.286719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 00:34:08.983 [2024-07-23 10:54:57.286808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.286834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 00:34:08.983 [2024-07-23 10:54:57.286931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.286958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 00:34:08.983 [2024-07-23 10:54:57.287049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.287075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 00:34:08.983 [2024-07-23 10:54:57.287159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.983 [2024-07-23 10:54:57.287184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.983 qpair failed and we were unable to recover it. 
00:34:08.983 [2024-07-23 10:54:57.287265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.287289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.983 [2024-07-23 10:54:57.287375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.287401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.983 [2024-07-23 10:54:57.287507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.287534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.983 [2024-07-23 10:54:57.287645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.287676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.983 [2024-07-23 10:54:57.287775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.287801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.983 [2024-07-23 10:54:57.287886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.287912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.983 [2024-07-23 10:54:57.288007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.288036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.983 [2024-07-23 10:54:57.288122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.288149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.983 [2024-07-23 10:54:57.288261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.288287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.983 [2024-07-23 10:54:57.288367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.288393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.983 [2024-07-23 10:54:57.288486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.288520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.983 [2024-07-23 10:54:57.288602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.288628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.983 [2024-07-23 10:54:57.288720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.983 [2024-07-23 10:54:57.288747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.983 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.288837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.288866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.288963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.288989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.289071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.289097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.289182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.289209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.289288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.289314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.289408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.289436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.289561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.289589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.289677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.289704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.289833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.289874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.289954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.289980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.290114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.290177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.290311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.290353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.290436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.290462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.290581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.290607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.290720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.290745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.290857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.290883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.290970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.290996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.291085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.291114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.291202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.291229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.291317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.291345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.291432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.291459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.291557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.291586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.291675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.291702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.291786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.291813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.291910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.291942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.292024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.292051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.984 [2024-07-23 10:54:57.292143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.984 [2024-07-23 10:54:57.292170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.984 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.292255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.292281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.292361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.292387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.292471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.292506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.292586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.292612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.292690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.292715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.292796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.292821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.292911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.292937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.293023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.293048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.293126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.293151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.293229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.293255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.293333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.293358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.293446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.293471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.293570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.293597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.293686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.293714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.293794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.293820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.293908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.293936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.294020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.294046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.294132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.294155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.294235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.294261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.294340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.294366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.294449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.294475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.294571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.294602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.294688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.294713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.294792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.294816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.294902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.294932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.295012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.295037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.985 [2024-07-23 10:54:57.295124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.985 [2024-07-23 10:54:57.295154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.985 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.295238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.295268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.295349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.295376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.295457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.295494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.295581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.295608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.295694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.295720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.295797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.295824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.295915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.295941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.296019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.296046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.296130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.296158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.296247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.296275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.296357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.296383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.296474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.296508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.296592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.296618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.296703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.296731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.296815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.296844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.296939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.296967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.297048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.297074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.297151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.297177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.297253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.297279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.297370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.297396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.297489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.297514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.297600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.297626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.297704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.297730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.297827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.297856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.297950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.297979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.298059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.298086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.986 qpair failed and we were unable to recover it.
00:34:08.986 [2024-07-23 10:54:57.298176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.986 [2024-07-23 10:54:57.298204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.987 qpair failed and we were unable to recover it.
00:34:08.987 [2024-07-23 10:54:57.298294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.987 [2024-07-23 10:54:57.298323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:08.987 qpair failed and we were unable to recover it.
00:34:08.987 [2024-07-23 10:54:57.298406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.298432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.298524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.298552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.298631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.298657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.298737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.298763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.298846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.298871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 
00:34:08.987 [2024-07-23 10:54:57.298954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.298979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.299059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.299085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.299181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.299206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.299286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.299311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.299395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.299421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 
00:34:08.987 [2024-07-23 10:54:57.299504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.299530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.299613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.299638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.299718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.299744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.299843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.299868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.299963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.299992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 
00:34:08.987 [2024-07-23 10:54:57.300073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.300100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.300192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.300222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.300310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.300336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.300421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.300448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.300539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.300567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 
00:34:08.987 [2024-07-23 10:54:57.300651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.300679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.300760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.300786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.300868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.300893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.300984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.301012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 00:34:08.987 [2024-07-23 10:54:57.301094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.987 [2024-07-23 10:54:57.301121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.987 qpair failed and we were unable to recover it. 
00:34:08.987 [2024-07-23 10:54:57.301212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.301240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.301332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.301359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.301438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.301465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.301556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.301583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.301678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.301706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 
00:34:08.988 [2024-07-23 10:54:57.301789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.301814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.301899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.301927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.302008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.302034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.302120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.302148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.302234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.302261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 
00:34:08.988 [2024-07-23 10:54:57.302347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.302373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.302453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.302495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.302583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.302610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.302701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.302727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.302806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.302832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 
00:34:08.988 [2024-07-23 10:54:57.302920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.302946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.303033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.303061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.303139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.303165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.303258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.303299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.303392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.303421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 
00:34:08.988 [2024-07-23 10:54:57.303512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.303539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.303623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.303649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.303742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.303770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.303857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.303884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 00:34:08.988 [2024-07-23 10:54:57.303965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.988 [2024-07-23 10:54:57.303991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.988 qpair failed and we were unable to recover it. 
00:34:08.989 [2024-07-23 10:54:57.304095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.304121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.304204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.304230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.304310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.304336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.304428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.304454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.304539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.304565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 
00:34:08.989 [2024-07-23 10:54:57.304686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.304712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.304797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.304824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.304914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.304940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.305028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.305054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.305135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.305161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 
00:34:08.989 [2024-07-23 10:54:57.305252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.305281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.305379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.305408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.305507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.305535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.305624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.305655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.305741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.305767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 
00:34:08.989 [2024-07-23 10:54:57.305849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.305875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.305967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.305993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.306082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.306109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.306192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.306219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.306297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.306323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 
00:34:08.989 [2024-07-23 10:54:57.306411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.306437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.306527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.989 [2024-07-23 10:54:57.306555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.989 qpair failed and we were unable to recover it. 00:34:08.989 [2024-07-23 10:54:57.306634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.306661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.306747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.306774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.306863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.306890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 
00:34:08.990 [2024-07-23 10:54:57.306980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.307007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.307086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.307112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.307205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.307232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.307319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.307346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.307434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.307462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 
00:34:08.990 [2024-07-23 10:54:57.307564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.307591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.307674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.307702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.307792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.307819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.307906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.307932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.308021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.308048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 
00:34:08.990 [2024-07-23 10:54:57.308131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.308157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.308243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.308271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.308351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.308378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.308460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.308499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.308582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.308608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 
00:34:08.990 [2024-07-23 10:54:57.308736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.308762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.308846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.308872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.308959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.308987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.309072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.309099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.309177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.309203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 
00:34:08.990 [2024-07-23 10:54:57.309283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.309309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.309399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.309425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.309516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.309544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.990 [2024-07-23 10:54:57.309625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.990 [2024-07-23 10:54:57.309652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.990 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.309736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.309763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 
00:34:08.991 [2024-07-23 10:54:57.309844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.309870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.309958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.309984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.310072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.310099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.310178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.310208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.310287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.310314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 
00:34:08.991 [2024-07-23 10:54:57.310400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.310427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.310514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.310541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.310618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.310644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.310747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.310773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.310865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.310893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 
00:34:08.991 [2024-07-23 10:54:57.310973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.310999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.311079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.311105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.311185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.311212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.311317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.311357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.311449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.311477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 
00:34:08.991 [2024-07-23 10:54:57.311591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.311619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.311701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.311726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.311819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.311847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.311933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.311959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.312044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.312070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 
00:34:08.991 [2024-07-23 10:54:57.312148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.312173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.312254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.312284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.312373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.312401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.312491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.312518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 00:34:08.991 [2024-07-23 10:54:57.312608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.991 [2024-07-23 10:54:57.312636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.991 qpair failed and we were unable to recover it. 
00:34:08.991 [2024-07-23 10:54:57.312719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.312748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.312841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.312869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.312951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.312978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.313065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.313091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.313170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.313196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 
00:34:08.992 [2024-07-23 10:54:57.313279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.313308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.313400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.313425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.313512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.313537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.313616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.313641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.313719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.313744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 
00:34:08.992 [2024-07-23 10:54:57.313824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.313849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.313940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.313968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.314048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.314074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.314163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.314190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.314277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.314303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 
00:34:08.992 [2024-07-23 10:54:57.314394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.314419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.314509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.314537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.314623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.314650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.314732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.314767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.314850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.314877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 
00:34:08.992 [2024-07-23 10:54:57.314964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.314992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.315078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.315103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.315182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.315208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.315287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.315313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.315396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.315421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 
00:34:08.992 [2024-07-23 10:54:57.315510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.315537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.315619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.992 [2024-07-23 10:54:57.315644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.992 qpair failed and we were unable to recover it. 00:34:08.992 [2024-07-23 10:54:57.315728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.315754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.315841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.315867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.315944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.315969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 
00:34:08.993 [2024-07-23 10:54:57.316063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.316092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.316182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.316207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.316292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.316321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.316401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.316427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.316513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.316540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 
00:34:08.993 [2024-07-23 10:54:57.316619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.316645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.316724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.316750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.316844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.316871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.316961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.316989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.317075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.317102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 
00:34:08.993 [2024-07-23 10:54:57.317188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.317215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.317313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.317338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.317423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.317449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.317538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.317565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.317682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.317708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 
00:34:08.993 [2024-07-23 10:54:57.317796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.317827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.317907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.317933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.318024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.318051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.318133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.318159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.318253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.318281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 
00:34:08.993 [2024-07-23 10:54:57.318365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.318393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.318490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.318518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.318607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.318634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.993 [2024-07-23 10:54:57.318717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.993 [2024-07-23 10:54:57.318743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.993 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.318832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.318859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 
00:34:08.994 [2024-07-23 10:54:57.318952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.318977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.319062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.319089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.319170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.319199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.319299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.319327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.319420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.319449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 
00:34:08.994 [2024-07-23 10:54:57.319595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.319637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.319743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.319770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.319850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.319875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.319955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.319980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.320074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.320102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 
00:34:08.994 [2024-07-23 10:54:57.320185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.320214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.320294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.320321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.320408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.320434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.320533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.320560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.320645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.320671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 
00:34:08.994 [2024-07-23 10:54:57.320760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.320788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.320869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.320896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.320984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.321013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.321096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.321122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.321202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.321228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 
00:34:08.994 [2024-07-23 10:54:57.321312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.321338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.321415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.321442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.321540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.321567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.321656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.321683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 00:34:08.994 [2024-07-23 10:54:57.321794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.321820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.994 qpair failed and we were unable to recover it. 
00:34:08.994 [2024-07-23 10:54:57.321928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.994 [2024-07-23 10:54:57.321992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.322082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.322110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.322203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.322231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.322315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.322341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.322428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.322454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 
00:34:08.995 [2024-07-23 10:54:57.322545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.322576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.322663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.322689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.322769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.322796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.322873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.322898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.323017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.323044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 
00:34:08.995 [2024-07-23 10:54:57.323156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.323216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.323307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.323333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.323444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.323520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.323605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.323632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.323723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.323748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 
00:34:08.995 [2024-07-23 10:54:57.323829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.323855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.323934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.323959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.324060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.324085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.324166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.324194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.995 [2024-07-23 10:54:57.324285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.324312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 
00:34:08.995 [2024-07-23 10:54:57.324402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.995 [2024-07-23 10:54:57.324431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.995 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.324568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.324608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.324709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.324739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.324858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.324885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.324970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.324997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 
00:34:08.996 [2024-07-23 10:54:57.325088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.325114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.325227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.325253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.325343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.325371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.325453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.325486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.325572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.325604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 
00:34:08.996 [2024-07-23 10:54:57.325682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.325708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.325792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.325820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.325905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.325932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.326053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.326081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.326222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.326252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 
00:34:08.996 [2024-07-23 10:54:57.326336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.326362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.326446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.326473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.326566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.326592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.326675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.326700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.326784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.326812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 
00:34:08.996 [2024-07-23 10:54:57.326899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.326925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.327005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.327030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.327109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.327135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.327224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.327253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.327338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.327365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 
00:34:08.996 [2024-07-23 10:54:57.327443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.327470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.327570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.327598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.996 qpair failed and we were unable to recover it. 00:34:08.996 [2024-07-23 10:54:57.327710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.996 [2024-07-23 10:54:57.327737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.327831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.327857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.327937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.327966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 
00:34:08.997 [2024-07-23 10:54:57.328076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.328102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.328220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.328249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.328359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.328420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.328500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.328526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.328606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.328632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 
00:34:08.997 [2024-07-23 10:54:57.328718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.328743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.328866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.328891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.328977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.329001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.329078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.329103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.329188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.329214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 
00:34:08.997 [2024-07-23 10:54:57.329299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.329326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.329418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.329443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.329538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.329563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.329660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.329688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.329768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.329793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 
00:34:08.997 [2024-07-23 10:54:57.329911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.329937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.330021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.330050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.330135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.330165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.330257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.330284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 00:34:08.997 [2024-07-23 10:54:57.330441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.330500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 
00:34:08.997 [2024-07-23 10:54:57.330593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.997 [2024-07-23 10:54:57.330621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:08.997 qpair failed and we were unable to recover it. 
[... identical connect()/qpair failures (posix.c:1037 connect() failed, errno = 111, followed by nvme_tcp.c:2374 sock connection error and "qpair failed and we were unable to recover it.") repeat continuously from 10:54:57.330 through 10:54:57.345, for tqpair handles 0x7fb6e0000b90, 0x7fb6e8000b90, and 0x1f80990, all targeting addr=10.0.0.2, port=4420 ...]
00:34:09.002 [2024-07-23 10:54:57.345834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.345860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.002 [2024-07-23 10:54:57.345943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.345969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.002 [2024-07-23 10:54:57.346056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.346084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.002 [2024-07-23 10:54:57.346173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.346201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.002 [2024-07-23 10:54:57.346347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.346373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 
00:34:09.002 [2024-07-23 10:54:57.346557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.346607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.002 [2024-07-23 10:54:57.346729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.346757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.002 [2024-07-23 10:54:57.346858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.346884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.002 [2024-07-23 10:54:57.346974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.347001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.002 [2024-07-23 10:54:57.347097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.347123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 
00:34:09.002 [2024-07-23 10:54:57.347211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.347237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.002 [2024-07-23 10:54:57.347324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.347350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.002 [2024-07-23 10:54:57.347431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.347457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.002 [2024-07-23 10:54:57.347550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.347579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.002 [2024-07-23 10:54:57.347687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.347716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 
00:34:09.002 [2024-07-23 10:54:57.347796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.002 [2024-07-23 10:54:57.347823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.002 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.347918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.347946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.348036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.348064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.348184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.348211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.348357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.348384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 
00:34:09.003 [2024-07-23 10:54:57.348524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.348567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.348687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.348717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.348847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.348877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.348971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.348998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.349096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.349123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 
00:34:09.003 [2024-07-23 10:54:57.349212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.349241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.349327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.349355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.349471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.349505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.349594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.349621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.349741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.349767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 
00:34:09.003 [2024-07-23 10:54:57.349859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.349885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.350000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.350025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.350141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.350170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.350266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.350291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.350450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.350504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 
00:34:09.003 [2024-07-23 10:54:57.350617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.350649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.350743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.350769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.350878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.350908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.351015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.351042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.351128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.351154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 
00:34:09.003 [2024-07-23 10:54:57.351238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.351264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.351382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.351409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.003 qpair failed and we were unable to recover it. 00:34:09.003 [2024-07-23 10:54:57.351511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.003 [2024-07-23 10:54:57.351539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.351678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.351708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.351808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.351836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 
00:34:09.004 [2024-07-23 10:54:57.351930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.351957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.352039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.352066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.352186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.352213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.352298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.352325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.352411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.352437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 
00:34:09.004 [2024-07-23 10:54:57.352573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.352602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.352725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.352751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.352829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.352856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.352943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.352970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.353087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.353113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 
00:34:09.004 [2024-07-23 10:54:57.353205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.353233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.353315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.353342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.353453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.353487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.353568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.353596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.353684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.353711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 
00:34:09.004 [2024-07-23 10:54:57.353833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.353860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.353954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.353981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.354069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.354096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.354181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.354208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.354287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.354314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 
00:34:09.004 [2024-07-23 10:54:57.354405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.354431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.354532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.354561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.354688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.354716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.354795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.004 [2024-07-23 10:54:57.354822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.004 qpair failed and we were unable to recover it. 00:34:09.004 [2024-07-23 10:54:57.354930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.005 [2024-07-23 10:54:57.354957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.005 qpair failed and we were unable to recover it. 
00:34:09.005 [2024-07-23 10:54:57.355042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.005 [2024-07-23 10:54:57.355069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.005 qpair failed and we were unable to recover it. 00:34:09.005 [2024-07-23 10:54:57.355210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.005 [2024-07-23 10:54:57.355237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.005 qpair failed and we were unable to recover it. 00:34:09.005 [2024-07-23 10:54:57.355330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.005 [2024-07-23 10:54:57.355357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.005 qpair failed and we were unable to recover it. 00:34:09.005 [2024-07-23 10:54:57.355443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.005 [2024-07-23 10:54:57.355471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.005 qpair failed and we were unable to recover it. 00:34:09.005 [2024-07-23 10:54:57.355559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.005 [2024-07-23 10:54:57.355591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.005 qpair failed and we were unable to recover it. 
00:34:09.005 [2024-07-23 10:54:57.355711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.005 [2024-07-23 10:54:57.355737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.005 qpair failed and we were unable to recover it.
[... the same connect()/qpair error pair repeats from 10:54:57.355832 through 10:54:57.370321 for tqpair=0x7fb6e0000b90, 0x7fb6e8000b90, and 0x1f80990, all with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:34:09.009 [2024-07-23 10:54:57.370410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.009 [2024-07-23 10:54:57.370437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.009 qpair failed and we were unable to recover it. 00:34:09.009 [2024-07-23 10:54:57.370549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.009 [2024-07-23 10:54:57.370577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.009 qpair failed and we were unable to recover it. 00:34:09.009 [2024-07-23 10:54:57.370666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.009 [2024-07-23 10:54:57.370693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.009 qpair failed and we were unable to recover it. 00:34:09.009 [2024-07-23 10:54:57.370786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.009 [2024-07-23 10:54:57.370814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.009 qpair failed and we were unable to recover it. 00:34:09.009 [2024-07-23 10:54:57.370917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.009 [2024-07-23 10:54:57.370944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.009 qpair failed and we were unable to recover it. 
00:34:09.009 [2024-07-23 10:54:57.371051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.009 [2024-07-23 10:54:57.371078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.009 qpair failed and we were unable to recover it. 00:34:09.009 [2024-07-23 10:54:57.371166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.371194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.371297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.371323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.371439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.371468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.371562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.371588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 
00:34:09.010 [2024-07-23 10:54:57.371670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.371695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.371788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.371815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.371923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.371949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.372053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.372079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.372196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.372222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 
00:34:09.010 [2024-07-23 10:54:57.372306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.372332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.372429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.372454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.372577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.372619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.372742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.372775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.372879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.372912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 
00:34:09.010 [2024-07-23 10:54:57.373018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.373044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.373124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.373150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.373259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.373286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.373394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.373420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.373501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.373532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 
00:34:09.010 [2024-07-23 10:54:57.373638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.373664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.373774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.373802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.373892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.373919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.374035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.374069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.374178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.374205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 
00:34:09.010 [2024-07-23 10:54:57.374285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.374311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.374397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.010 [2024-07-23 10:54:57.374423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-23 10:54:57.374529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.374556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.374644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.374670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.374764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.374790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 
00:34:09.011 [2024-07-23 10:54:57.374877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.374902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.374985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.375010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.375093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.375121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.375199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.375225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.375310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.375336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 
00:34:09.011 [2024-07-23 10:54:57.375419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.375445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.375557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.375587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.375679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.375705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.375795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.375821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.375899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.375924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 
00:34:09.011 [2024-07-23 10:54:57.376013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.376050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.376130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.376155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.376257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.376284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.376374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.376400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.376489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.376514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 
00:34:09.011 [2024-07-23 10:54:57.376601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.376627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.376741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.376788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.376887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.376914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.377024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.377061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.377170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.377198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 
00:34:09.011 [2024-07-23 10:54:57.377297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.377323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.377408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.377434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.377548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.377575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.377663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.011 [2024-07-23 10:54:57.377689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-23 10:54:57.377803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.377847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 
00:34:09.012 [2024-07-23 10:54:57.377928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.377954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.378041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.378067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.378152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.378179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.378267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.378294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.378375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.378400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 
00:34:09.012 [2024-07-23 10:54:57.378491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.378520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.378613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.378648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.378755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.378782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.378860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.378886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.378987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.379016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 
00:34:09.012 [2024-07-23 10:54:57.379101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.379128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.379208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.379234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.379328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.379359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.379441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.379468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.379569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.379597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 
00:34:09.012 [2024-07-23 10:54:57.379678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.379704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.379790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.379816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.379909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.379935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.380024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.380049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.380129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.380154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 
00:34:09.012 [2024-07-23 10:54:57.380245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.380270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.380358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.380387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.380501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.380529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.380616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.380642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 00:34:09.012 [2024-07-23 10:54:57.380737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.012 [2024-07-23 10:54:57.380764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.012 qpair failed and we were unable to recover it. 
00:34:09.012 [2024-07-23 10:54:57.380851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.012 [2024-07-23 10:54:57.380879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.012 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.380991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.381020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.381114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.381142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.381247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.381273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.381490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.381518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.381616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.381642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.381724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.381749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.381837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.381863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.381941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.381966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.382068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.382093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.382175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.382200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.382297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.382322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.382404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.382429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.382509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.382544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.382632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.382661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.382755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.382781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.382868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.382894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.382988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.383015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.383102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.383130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.383235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.383262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.383353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.383380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.383460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.383493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.383579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.383605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.383690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.383716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.383821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.383846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.383937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.383963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.384070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.384103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.013 [2024-07-23 10:54:57.384196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.013 [2024-07-23 10:54:57.384228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.013 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.384367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.384393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.384475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.384511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.384616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.384643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.384729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.384755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.384843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.384870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.384952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.384978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.385079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.385106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.385214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.385241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.385324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.385350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.385434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.385460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.385572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.385598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.385675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.385700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.385783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.385813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.385908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.385936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.386032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.386058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.386140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.386168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.386256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.386283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.386376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.386403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.386499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.386527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.386617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.386644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.386740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.386766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.386852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.386877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.386967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.386994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.387075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.387101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.387303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.387332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.387431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.387461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.014 qpair failed and we were unable to recover it.
00:34:09.014 [2024-07-23 10:54:57.387570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.014 [2024-07-23 10:54:57.387604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.387712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.387744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.387859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.387887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.387980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.388006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.388087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.388116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.388219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.388246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.388345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.388373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.388452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.388485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.388584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.388611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.388693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.388720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.388817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.388844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.388944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.388971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.389083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.389113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.389212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.389240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.389336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.389363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.389439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.389465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.389575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.389602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.389686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.389713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.389797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.389824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.389908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.389934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.015 [2024-07-23 10:54:57.390031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.015 [2024-07-23 10:54:57.390071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.015 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.390173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.390201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.390292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.390319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.390411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.390439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.390550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.390578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.390675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.390702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.390789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.390816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.390909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.390941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.391028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.391058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.391146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.391173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.391268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.391295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.391387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.391413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.391518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.391548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.391641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.391668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.391755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.391781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.391866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.391892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.391983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.392011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.392114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.392142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.392245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.392274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.392370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.392397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.392500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.016 [2024-07-23 10:54:57.392527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.016 qpair failed and we were unable to recover it.
00:34:09.016 [2024-07-23 10:54:57.392631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.016 [2024-07-23 10:54:57.392657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.016 qpair failed and we were unable to recover it. 00:34:09.016 [2024-07-23 10:54:57.392736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.016 [2024-07-23 10:54:57.392763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.016 qpair failed and we were unable to recover it. 00:34:09.016 [2024-07-23 10:54:57.392858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.016 [2024-07-23 10:54:57.392885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.016 qpair failed and we were unable to recover it. 00:34:09.016 [2024-07-23 10:54:57.393009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.016 [2024-07-23 10:54:57.393050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.016 qpair failed and we were unable to recover it. 00:34:09.016 [2024-07-23 10:54:57.393137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.016 [2024-07-23 10:54:57.393165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.016 qpair failed and we were unable to recover it. 
00:34:09.016 [2024-07-23 10:54:57.393249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.016 [2024-07-23 10:54:57.393275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.016 qpair failed and we were unable to recover it. 00:34:09.016 [2024-07-23 10:54:57.393356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.016 [2024-07-23 10:54:57.393382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.016 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.393476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.393510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.393607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.393636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.393765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.393792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 
00:34:09.017 [2024-07-23 10:54:57.393910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.393936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.394026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.394054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.394167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.394196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.394331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.394373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.394472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.394514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 
00:34:09.017 [2024-07-23 10:54:57.394615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.394641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.394738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.394763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.394866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.394892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.394991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.395016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.395106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.395139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 
00:34:09.017 [2024-07-23 10:54:57.395228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.395255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.395341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.395369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.395455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.395489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.395586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.395613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.395704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.395731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 
00:34:09.017 [2024-07-23 10:54:57.395817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.395844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.395926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.395957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.396045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.396071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.396157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.396183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.396280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.396307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 
00:34:09.017 [2024-07-23 10:54:57.396398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.396425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.396521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.396550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.396657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.017 [2024-07-23 10:54:57.396683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.017 qpair failed and we were unable to recover it. 00:34:09.017 [2024-07-23 10:54:57.396775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.396804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.396907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.396935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 
00:34:09.018 [2024-07-23 10:54:57.397034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.397061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.397139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.397166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.397244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.397270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.397351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.397377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.397470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.397504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 
00:34:09.018 [2024-07-23 10:54:57.397601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.397626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.397711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.397745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.397849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.397875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.397962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.397988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.398067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.398092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 
00:34:09.018 [2024-07-23 10:54:57.398180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.398205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.398294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.398320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.398410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.398437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.398550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.398584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.398676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.398702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 
00:34:09.018 [2024-07-23 10:54:57.398789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.398816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.398919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.398945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.399047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.399073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.399159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.399190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.399274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.399299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 
00:34:09.018 [2024-07-23 10:54:57.399406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.399443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.399546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.399574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.399661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.399689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.399768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.399795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.399881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.399908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 
00:34:09.018 [2024-07-23 10:54:57.400008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.018 [2024-07-23 10:54:57.400035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.018 qpair failed and we were unable to recover it. 00:34:09.018 [2024-07-23 10:54:57.400118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.400144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.400235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.400262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.400357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.400384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.400471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.400504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 
00:34:09.019 [2024-07-23 10:54:57.400592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.400618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.400719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.400744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.400839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.400865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.400947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.400976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.401076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.401103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 
00:34:09.019 [2024-07-23 10:54:57.401186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.401212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.401313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.401339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.401430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.401458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.401591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.401634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.401734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.401762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 
00:34:09.019 [2024-07-23 10:54:57.401844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.401870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.402013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.402040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.402139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.402182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.402282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.402311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.402403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.402429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 
00:34:09.019 [2024-07-23 10:54:57.402525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.402557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.402638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.402664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.402754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.402781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.402889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.402917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 00:34:09.019 [2024-07-23 10:54:57.403011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.019 [2024-07-23 10:54:57.403047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.019 qpair failed and we were unable to recover it. 
00:34:09.019 [2024-07-23 10:54:57.403135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.019 [2024-07-23 10:54:57.403162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.019 qpair failed and we were unable to recover it.
[... the same three-line error sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error against addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats approximately 115 more times between 10:54:57.403 and 10:54:57.418, cycling across tqpair=0x7fb6e0000b90, tqpair=0x7fb6e8000b90, and tqpair=0x1f80990 ...]
00:34:09.304 [2024-07-23 10:54:57.418070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.418109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.418210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.418238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.418346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.418375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.418473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.418511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.418608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.418637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 
00:34:09.304 [2024-07-23 10:54:57.418717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.418744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.418828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.418855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.418955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.418981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.419087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.419112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.419209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.419239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 
00:34:09.304 [2024-07-23 10:54:57.419332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.419360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.419454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.419490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.419582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.419609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.419717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.419746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.419857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.419883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 
00:34:09.304 [2024-07-23 10:54:57.419980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.420006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.420093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.420119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.420204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.420230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.420312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.420337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.420424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.420451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 
00:34:09.304 [2024-07-23 10:54:57.420563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.420590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.420669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.420695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.420782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.420809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.420897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.420924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.421024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.421050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 
00:34:09.304 [2024-07-23 10:54:57.421161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.421188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.421272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.421298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.421387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.421417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.421524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.421553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.421687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.421713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 
00:34:09.304 [2024-07-23 10:54:57.421811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.421837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.421938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.421964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.422065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.422091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.304 [2024-07-23 10:54:57.422174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.304 [2024-07-23 10:54:57.422200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.304 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.422284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.422311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 
00:34:09.305 [2024-07-23 10:54:57.422429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.422455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.422559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.422587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.422686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.422712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.422818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.422844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.422927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.422953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 
00:34:09.305 [2024-07-23 10:54:57.423045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.423072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.423161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.423187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.423265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.423290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.423387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.423412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.423510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.423540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 
00:34:09.305 [2024-07-23 10:54:57.423633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.423660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.423755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.423784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.423879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.423905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.423996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.424023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.424121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.424148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 
00:34:09.305 [2024-07-23 10:54:57.424231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.424259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.424339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.424364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.424455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.424490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.424594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.424619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.424720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.424750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 
00:34:09.305 [2024-07-23 10:54:57.424845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.424871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.424958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.424987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.425087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.425117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.425219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.425247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.425337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.425364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 
00:34:09.305 [2024-07-23 10:54:57.425449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.425476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.425576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.425605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.425695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.425723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.425824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.425852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.425957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.425984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 
00:34:09.305 [2024-07-23 10:54:57.426090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.426117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.426226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.426259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.426359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.426388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.305 qpair failed and we were unable to recover it. 00:34:09.305 [2024-07-23 10:54:57.426492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.305 [2024-07-23 10:54:57.426529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.426615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.426641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 
00:34:09.306 [2024-07-23 10:54:57.426729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.426755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.426836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.426861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.426945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.426971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.427060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.427086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.427187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.427217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 
00:34:09.306 [2024-07-23 10:54:57.427309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.427335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.427422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.427448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.427553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.427580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.427674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.427700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.427795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.427821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 
00:34:09.306 [2024-07-23 10:54:57.427902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.427928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.428008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.428038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.428134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.428160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.428252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.428282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.428365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.428392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 
00:34:09.306 [2024-07-23 10:54:57.428489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.428516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.428600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.428626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.428726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.428752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.428845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.428872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.428951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.428977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 
00:34:09.306 [2024-07-23 10:54:57.429065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.429095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.429198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.429227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.429327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.429355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.429456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.429488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.429591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.429618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 
00:34:09.306 [2024-07-23 10:54:57.429709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.429736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.429827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.429855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.429948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.429975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.430084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.430110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.430201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.430230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 
00:34:09.306 [2024-07-23 10:54:57.430332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.430359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.306 [2024-07-23 10:54:57.430447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.306 [2024-07-23 10:54:57.430476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.306 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.430597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.430625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.430725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.430752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.430838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.430864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 
00:34:09.307 [2024-07-23 10:54:57.430946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.430971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.431060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.431088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.431185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.431212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.431305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.431339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.431437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.431464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 
00:34:09.307 [2024-07-23 10:54:57.431566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.431593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.431681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.431708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.431803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.431831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.431919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.431946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.432033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.432060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 
00:34:09.307 [2024-07-23 10:54:57.432149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.432175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.432268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.432297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.432388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.432415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.432513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.432541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.432642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.432669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 
00:34:09.307 [2024-07-23 10:54:57.432762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.432789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.432885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.432912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.433001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.433027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.433111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.433139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.433226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.433252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 
00:34:09.307 [2024-07-23 10:54:57.433339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.433367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.433463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.433494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.433596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.433623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.433714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.433740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.433842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.433868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 
00:34:09.307 [2024-07-23 10:54:57.433960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.433990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.434081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.434109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.434195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.434221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.434312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.434339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.434439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.434464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 
00:34:09.307 [2024-07-23 10:54:57.434577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.434605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.434690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.434718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.307 qpair failed and we were unable to recover it. 00:34:09.307 [2024-07-23 10:54:57.434807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.307 [2024-07-23 10:54:57.434834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.434924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.434951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.435057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.435084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 
00:34:09.308 [2024-07-23 10:54:57.435173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.435201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.435300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.435328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.435410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.435436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.435555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.435582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.435686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.435712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 
00:34:09.308 [2024-07-23 10:54:57.435806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.435832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.435926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.435955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.436047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.436075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.436175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.436206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.436305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.436331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 
00:34:09.308 [2024-07-23 10:54:57.436419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.436448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.436552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.436580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.436669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.436697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.436792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.436820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.436917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.436944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 
00:34:09.308 [2024-07-23 10:54:57.437025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.437051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.437139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.437166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.437259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.437284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.437379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.437407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.437515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.437542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 
00:34:09.308 [2024-07-23 10:54:57.437645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.437673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.437772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.437799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.437899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.437926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.438028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.438056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.438145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.438173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 
00:34:09.308 [2024-07-23 10:54:57.438263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.438290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.438389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.438415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.438501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.438528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.438613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.438638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.438726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.438752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 
00:34:09.308 [2024-07-23 10:54:57.438841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.438866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.438967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.438994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.439092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.439118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.308 qpair failed and we were unable to recover it. 00:34:09.308 [2024-07-23 10:54:57.439203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.308 [2024-07-23 10:54:57.439228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.309 qpair failed and we were unable to recover it. 00:34:09.309 [2024-07-23 10:54:57.439319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.309 [2024-07-23 10:54:57.439349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.309 qpair failed and we were unable to recover it. 
00:34:09.309 [2024-07-23 10:54:57.439441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.309 [2024-07-23 10:54:57.439474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.309 qpair failed and we were unable to recover it. 00:34:09.309 [2024-07-23 10:54:57.439583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.309 [2024-07-23 10:54:57.439611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.309 qpair failed and we were unable to recover it. 00:34:09.309 [2024-07-23 10:54:57.439698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.309 [2024-07-23 10:54:57.439725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.309 qpair failed and we were unable to recover it. 00:34:09.309 [2024-07-23 10:54:57.439834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.309 [2024-07-23 10:54:57.439861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.309 qpair failed and we were unable to recover it. 00:34:09.309 [2024-07-23 10:54:57.439947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.309 [2024-07-23 10:54:57.439975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.309 qpair failed and we were unable to recover it. 
00:34:09.309 [2024-07-23 10:54:57.440068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.440096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.440193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.440219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.440299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.440324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.440422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.440448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.440543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.440569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.440657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.440683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.440766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.440791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.440880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.440906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.440984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.441009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.441113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.441142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.441239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.441267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.441364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.441394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.441497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.441546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.441653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.441680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.441767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.441793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.441874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.441901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.441995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.442023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.442105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.442133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.442223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.442251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.442352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.442378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.442468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.442503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.442591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.442617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.442707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.442737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.442818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.442843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.442929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.442954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.443041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.443067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.443154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.443181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.443273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.443301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.443386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.443413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.443504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.309 [2024-07-23 10:54:57.443534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.309 qpair failed and we were unable to recover it.
00:34:09.309 [2024-07-23 10:54:57.443639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.443665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.443777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.443805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.443888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.443914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.444002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.444030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.444114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.444140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.444225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.444251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.444336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.444362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.444451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.444476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.444582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.444608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.444701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.444727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.444817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.444842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.444927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.444954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.445053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.445079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.445165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.445191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.445281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.445311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.445400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.445427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.445522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.445549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.445645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.445672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.445751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.445778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.445873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.445905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.445999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.446026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.446128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.446160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.446249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.446276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.446366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.446395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.446492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.446522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.446637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.446663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.446750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.446778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.446867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.446894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.446976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.447003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.447088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.310 [2024-07-23 10:54:57.447114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.310 qpair failed and we were unable to recover it.
00:34:09.310 [2024-07-23 10:54:57.447207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.447234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.447331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.447357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.447446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.447472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.447580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.447606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.447704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.447732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.447821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.447847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.447931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.447957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.448038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.448064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.448153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.448181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.448265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.448291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.448380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.448407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.448506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.448533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.448628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.448656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.448751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.448779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.448872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.448898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.449012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.449041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.449156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.449183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.449274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.449303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.449387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.449412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.449506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.449534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.449630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.449656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.449738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.449765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.449853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.449880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.449988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.450014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.450095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.450120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.450203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.450229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.450321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.450346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.450426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.450451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.450546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.450572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.450653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.450678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.450765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.450792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.450878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.450905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.451003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.451033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.311 [2024-07-23 10:54:57.451129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.311 [2024-07-23 10:54:57.451158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.311 qpair failed and we were unable to recover it.
00:34:09.312 [2024-07-23 10:54:57.451247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.312 [2024-07-23 10:54:57.451275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.312 qpair failed and we were unable to recover it.
00:34:09.312 [2024-07-23 10:54:57.451363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.451389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.451476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.451509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.451604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.451630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.451717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.451744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.451830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.451856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 
00:34:09.312 [2024-07-23 10:54:57.451977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.452019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.452131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.452159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.452259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.452287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.452377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.452404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.452498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.452536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 
00:34:09.312 [2024-07-23 10:54:57.452646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.452673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.452767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.452794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.452884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.452910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.453006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.453033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.453122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.453150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 
00:34:09.312 [2024-07-23 10:54:57.453241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.453269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.453363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.453389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.453478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.453512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.453616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.453642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.453741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.453767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 
00:34:09.312 [2024-07-23 10:54:57.453849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.453875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.453962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.453994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.454087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.454114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.454209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.454236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.454325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.454350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 
00:34:09.312 [2024-07-23 10:54:57.454436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.454463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.454570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.454595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.454696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.454721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.454811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.454838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.454921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.454950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 
00:34:09.312 [2024-07-23 10:54:57.455038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.455066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.455165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.455193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.455272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.455299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.455391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.455417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.455516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.455543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 
00:34:09.312 [2024-07-23 10:54:57.455642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.312 [2024-07-23 10:54:57.455670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.312 qpair failed and we were unable to recover it. 00:34:09.312 [2024-07-23 10:54:57.455769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.455794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.455887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.455915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.456004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.456031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.456119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.456148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 
00:34:09.313 [2024-07-23 10:54:57.456246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.456273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.456360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.456388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.456478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.456511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.456604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.456631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.456720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.456747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 
00:34:09.313 [2024-07-23 10:54:57.456831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.456857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.456951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.456978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.457068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.457093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.457185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.457226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.457324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.457352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 
00:34:09.313 [2024-07-23 10:54:57.457433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.457459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.457566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.457593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.457695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.457721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.457813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.457842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.457938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.457964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 
00:34:09.313 [2024-07-23 10:54:57.458051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.458078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.458172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.458198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.458281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.458308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.458395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.458422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.458520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.458548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 
00:34:09.313 [2024-07-23 10:54:57.458639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.458666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.458757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.458783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.458874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.458903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.458999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.459029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.459119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.459147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 
00:34:09.313 [2024-07-23 10:54:57.459234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.459260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.459341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.459367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.459461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.459494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.459593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.459619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 00:34:09.313 [2024-07-23 10:54:57.459705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.459731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.313 qpair failed and we were unable to recover it. 
00:34:09.313 [2024-07-23 10:54:57.459841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.313 [2024-07-23 10:54:57.459867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.459963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.459988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.460072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.460100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.460188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.460214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.460298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.460324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 
00:34:09.314 [2024-07-23 10:54:57.460417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.460445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.460561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.460588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.460671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.460698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.460783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.460809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.460906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.460934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 
00:34:09.314 [2024-07-23 10:54:57.461028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.461057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.461152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.461180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.461282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.461307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.461387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.461412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.461505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.461531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 
00:34:09.314 [2024-07-23 10:54:57.461620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.461646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.461746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.461771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.461853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.461879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.461966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.461993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.462083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.462108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 
00:34:09.314 [2024-07-23 10:54:57.462192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.462219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.462297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.462322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.462403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.462429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.462522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.462548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.462641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.462671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 
00:34:09.314 [2024-07-23 10:54:57.462764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.462792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.462879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.462905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.462990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.463017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.463104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.463132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.463228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.463255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 
00:34:09.314 [2024-07-23 10:54:57.463352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.463381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.463489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.463528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.463634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.463664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.463751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.463779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.463870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.463897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 
00:34:09.314 [2024-07-23 10:54:57.463987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.464013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.464105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.464131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.464221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.464249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.314 [2024-07-23 10:54:57.464332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.314 [2024-07-23 10:54:57.464359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.314 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.464449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.464476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 
00:34:09.315 [2024-07-23 10:54:57.464570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.464596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.464680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.464706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.464787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.464812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.464903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.464932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.465028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.465055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 
00:34:09.315 [2024-07-23 10:54:57.465137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.465168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.465258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.465284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.465381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.465408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.465504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.465532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.465628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.465656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 
00:34:09.315 [2024-07-23 10:54:57.465749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.465776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.465859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.465886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.465981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.466008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.466093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.466120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.466220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.466247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 
00:34:09.315 [2024-07-23 10:54:57.466330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.466358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.466448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.466474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.466583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.466610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.466704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.466729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.466828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.466854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 
00:34:09.315 [2024-07-23 10:54:57.466940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.466965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.467046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.467072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.467164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.467189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.467274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.467299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.467388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.467413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 
00:34:09.315 [2024-07-23 10:54:57.467512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.467538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.467624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.467657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.467743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.467768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.467852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.467878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 00:34:09.315 [2024-07-23 10:54:57.467969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.467999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.315 qpair failed and we were unable to recover it. 
00:34:09.315 [2024-07-23 10:54:57.468085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.315 [2024-07-23 10:54:57.468112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.468195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.468222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.468309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.468340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.468435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.468462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.468580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.468608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 
00:34:09.316 [2024-07-23 10:54:57.468690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.468718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.468802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.468827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.468915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.468941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.469042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.469067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.469155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.469182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 
00:34:09.316 [2024-07-23 10:54:57.469270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.469295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.469377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.469402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.469490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.469515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.469606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.469632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.469715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.469741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 
00:34:09.316 [2024-07-23 10:54:57.469832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.469862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.469953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.469980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.470072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.470100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.470197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.470225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.470320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.470347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 
00:34:09.316 [2024-07-23 10:54:57.470434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.470461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.470562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.470590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.470678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.470704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.470789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.470815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.470895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.470920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 
00:34:09.316 [2024-07-23 10:54:57.471006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.471033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.471115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.471141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.471232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.471258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.471350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.471376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.471471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.471510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 
00:34:09.316 [2024-07-23 10:54:57.471610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.471639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.471747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.471774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.471869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.471899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.471990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.472017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.472107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.472135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 
00:34:09.316 [2024-07-23 10:54:57.472228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.472255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.472340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.472367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.472449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.472476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.316 qpair failed and we were unable to recover it. 00:34:09.316 [2024-07-23 10:54:57.472578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.316 [2024-07-23 10:54:57.472604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.317 qpair failed and we were unable to recover it. 00:34:09.317 [2024-07-23 10:54:57.472696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.317 [2024-07-23 10:54:57.472723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.317 qpair failed and we were unable to recover it. 
00:34:09.317 [2024-07-23 10:54:57.472824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.317 [2024-07-23 10:54:57.472851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.317 qpair failed and we were unable to recover it. 00:34:09.317 [2024-07-23 10:54:57.472934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.317 [2024-07-23 10:54:57.472960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.317 qpair failed and we were unable to recover it. 00:34:09.317 [2024-07-23 10:54:57.473058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.317 [2024-07-23 10:54:57.473085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.317 qpair failed and we were unable to recover it. 00:34:09.317 [2024-07-23 10:54:57.473185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.317 [2024-07-23 10:54:57.473213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.317 qpair failed and we were unable to recover it. 00:34:09.317 [2024-07-23 10:54:57.473294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.317 [2024-07-23 10:54:57.473323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.317 qpair failed and we were unable to recover it. 
00:34:09.317 [2024-07-23 10:54:57.473403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.317 [2024-07-23 10:54:57.473430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.317 qpair failed and we were unable to recover it. 00:34:09.317 [2024-07-23 10:54:57.473521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.317 [2024-07-23 10:54:57.473551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.317 qpair failed and we were unable to recover it. 00:34:09.317 [2024-07-23 10:54:57.473643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.317 [2024-07-23 10:54:57.473669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.317 qpair failed and we were unable to recover it. 00:34:09.317 [2024-07-23 10:54:57.473765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.317 [2024-07-23 10:54:57.473792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.317 qpair failed and we were unable to recover it. 00:34:09.317 [2024-07-23 10:54:57.473880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.317 [2024-07-23 10:54:57.473906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.317 qpair failed and we were unable to recover it. 
00:34:09.317 [2024-07-23 10:54:57.473997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.474024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.474127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.474153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.474252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.474281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.474370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.474397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.474498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.474525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.474621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.474646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.474741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.474767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.474851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.474876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.474992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.475018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.475102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.475127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.475209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.475234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.475328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.475353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.475441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.475466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.475558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.475584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.475706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.475732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.475822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.475847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.475934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.475962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.476049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.476073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.476170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.476198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.476292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.476317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.476411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.476436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.476563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.476590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.476697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.476722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.476843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.476868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.476953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.476978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.477064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.477090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.317 qpair failed and we were unable to recover it.
00:34:09.317 [2024-07-23 10:54:57.477182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.317 [2024-07-23 10:54:57.477208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.477298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.477326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.477422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.477448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.477540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.477571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.477670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.477698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.477796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.477823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.477914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.477941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.478047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.478087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.478196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.478224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.478304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.478332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.478416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.478441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.478545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.478572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.478659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.478684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.478780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.478806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.478893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.478919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.479006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.479031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.479121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.479147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.479236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.479261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.479343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.479369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.479458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.479495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.479591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.479618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.479716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.479742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.479836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.479863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.479953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.479980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.480074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.480103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.480191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.480225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.480320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.480347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.480434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.480460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.480551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.480578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.480674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.480701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.480789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.480816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.480918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.480943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.481046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.481073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.481159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.481184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.481279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.481305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.481406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.481432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.481530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.481556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.481661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.318 [2024-07-23 10:54:57.481686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.318 qpair failed and we were unable to recover it.
00:34:09.318 [2024-07-23 10:54:57.481777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.481802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.481892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.481922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.482015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.482044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.482133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.482158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.482241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.482266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.482346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.482371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.482457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.482490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.482588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.482613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.482711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.482736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.482828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.482857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.482950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.482979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.483062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.483090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.483181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.483209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.483297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.483324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.483413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.483439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.483536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.483565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.483660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.483687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.483779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.483806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.483908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.483935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.484038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.484066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.484156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.484182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.484274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.484301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.484383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.484408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.484502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.484529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.484624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.484650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.484733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.484759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.484844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.484869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.484966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.484992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.485087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.485116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.485202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.485229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.485314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.319 [2024-07-23 10:54:57.485341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.319 qpair failed and we were unable to recover it.
00:34:09.319 [2024-07-23 10:54:57.485430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.319 [2024-07-23 10:54:57.485458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.319 qpair failed and we were unable to recover it. 00:34:09.319 [2024-07-23 10:54:57.485559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.319 [2024-07-23 10:54:57.485588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.319 qpair failed and we were unable to recover it. 00:34:09.319 [2024-07-23 10:54:57.485689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.319 [2024-07-23 10:54:57.485721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.319 qpair failed and we were unable to recover it. 00:34:09.319 [2024-07-23 10:54:57.485810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.319 [2024-07-23 10:54:57.485837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.319 qpair failed and we were unable to recover it. 00:34:09.319 [2024-07-23 10:54:57.485927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.485953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 
00:34:09.320 [2024-07-23 10:54:57.486034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.486065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.486157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.486185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.486279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.486304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.486391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.486417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.486505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.486531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 
00:34:09.320 [2024-07-23 10:54:57.486608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.486633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.486721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.486747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.486836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.486861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.486948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.486974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.487059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.487086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 
00:34:09.320 [2024-07-23 10:54:57.487173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.487199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.487296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.487321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.487407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.487434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.487527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.487553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.487643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.487668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 
00:34:09.320 [2024-07-23 10:54:57.487755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.487782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.487875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.487901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.487986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.488015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.488104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.488130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.488217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.488244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 
00:34:09.320 [2024-07-23 10:54:57.488336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.488362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.488445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.488471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.488583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.488610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.488698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.488725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.488811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.488838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 
00:34:09.320 [2024-07-23 10:54:57.488940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.488968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.489059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.489086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.489182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.489214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.489311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.489338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.320 [2024-07-23 10:54:57.489435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.489464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 
00:34:09.320 [2024-07-23 10:54:57.489569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.320 [2024-07-23 10:54:57.489598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.320 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.489683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.489710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.489804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.489832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.489915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.489942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.490034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.490061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 
00:34:09.321 [2024-07-23 10:54:57.490151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.490177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.490265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.490292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.490386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.490413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.490501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.490529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.490635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.490661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 
00:34:09.321 [2024-07-23 10:54:57.490752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.490780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.490868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.490893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.490972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.490997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.491080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.491107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.491201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.491227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 
00:34:09.321 [2024-07-23 10:54:57.491311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.491337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.491425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.491450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.491558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.491583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.491673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.491699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.491790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.491816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 
00:34:09.321 [2024-07-23 10:54:57.491901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.491926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.492015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.492044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.492130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.492156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.492249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.492275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.492359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.492390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 
00:34:09.321 [2024-07-23 10:54:57.492486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.492512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.492614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.492639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.492731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.492758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.492848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.492875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.492970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.492999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 
00:34:09.321 [2024-07-23 10:54:57.493095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.493122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.493214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.493242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.493337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.493363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.493452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.493478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 00:34:09.321 [2024-07-23 10:54:57.493596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.493624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.321 qpair failed and we were unable to recover it. 
00:34:09.321 [2024-07-23 10:54:57.493712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.321 [2024-07-23 10:54:57.493739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.493819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.493845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.493940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.493967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.494072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.494100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.494193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.494221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 
00:34:09.322 [2024-07-23 10:54:57.494315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.494343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.494427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.494452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.494561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.494588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.494695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.494722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.494817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.494843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 
00:34:09.322 [2024-07-23 10:54:57.494929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.494954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.495040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.495066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.495166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.495193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.495285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.495310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.495393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.495419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 
00:34:09.322 [2024-07-23 10:54:57.495517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.495547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.495643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.495671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.495759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.495787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.495878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.495904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.495994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.496021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 
00:34:09.322 [2024-07-23 10:54:57.496111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.496139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.496230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.496257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.496345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.496374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.496463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.496497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.496602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.496629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 
00:34:09.322 [2024-07-23 10:54:57.496724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.496749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.496841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.496868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.496958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.496985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.497078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.497105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.497191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.497221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 
00:34:09.322 [2024-07-23 10:54:57.497314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.497340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.497429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.497455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.497552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.497578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.497691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.497715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.497794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.497819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 
00:34:09.322 [2024-07-23 10:54:57.497904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.497930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.498046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.498072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.498155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.322 [2024-07-23 10:54:57.498180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.322 qpair failed and we were unable to recover it. 00:34:09.322 [2024-07-23 10:54:57.498263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.498290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.498374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.498404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 
00:34:09.323 [2024-07-23 10:54:57.498510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.498538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.498653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.498680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.498760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.498786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.498907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.498933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.499055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.499088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 
00:34:09.323 [2024-07-23 10:54:57.499185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.499213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.499304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.499334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.499430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.499458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.499563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.499590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.499681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.499708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 
00:34:09.323 [2024-07-23 10:54:57.499801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.499828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.499916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.499944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.500023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.500050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.500151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.500178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.500277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.500305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 
00:34:09.323 [2024-07-23 10:54:57.500395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.500422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.500524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.500555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.500645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.500671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.500759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.500785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.500873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.500900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 
00:34:09.323 [2024-07-23 10:54:57.500991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.501019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.501105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.501129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.501212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.501238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.501322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.501347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.501435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.501460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 
00:34:09.323 [2024-07-23 10:54:57.501564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.501591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.501682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.501710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.501799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.501825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.501916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.501943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.502035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.502061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 
00:34:09.323 [2024-07-23 10:54:57.502155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.502181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.502265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.502293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.323 [2024-07-23 10:54:57.502374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.323 [2024-07-23 10:54:57.502399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.323 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.502492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.502518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.502601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.502626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 
00:34:09.324 [2024-07-23 10:54:57.502726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.502751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.502849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.502876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.502964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.502989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.503074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.503099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.503182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.503208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 
00:34:09.324 [2024-07-23 10:54:57.503292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.503317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.503409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.503434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.503541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.503568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.503652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.503682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.503769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.503795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 
00:34:09.324 [2024-07-23 10:54:57.503883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.503909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.503992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.504017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.504103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.504128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.504219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.504244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.504332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.504362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 
00:34:09.324 [2024-07-23 10:54:57.504459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.504493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.504583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.504609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.504700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.504726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.504822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.504850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.504934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.504961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 
00:34:09.324 [2024-07-23 10:54:57.505049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.505076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.505179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.505214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.505311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.505340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.505432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.505460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.505574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.505599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 
00:34:09.324 [2024-07-23 10:54:57.505693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.505720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.505810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.505835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.505916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.505942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.506035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.506061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.506143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.506168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 
00:34:09.324 [2024-07-23 10:54:57.506262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.506288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.506373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.506398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.506477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.506510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.506594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.506620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 00:34:09.324 [2024-07-23 10:54:57.506708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.324 [2024-07-23 10:54:57.506734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.324 qpair failed and we were unable to recover it. 
00:34:09.324 [2024-07-23 10:54:57.506830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.506859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.506939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.506965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.507046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.507072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.507155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.507181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.507270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.507296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 
00:34:09.325 [2024-07-23 10:54:57.507381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.507406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.507517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.507545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.507641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.507668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.507758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.507785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.507870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.507896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 
00:34:09.325 [2024-07-23 10:54:57.507984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.508009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.508101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.508127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.508214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.508239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.508319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.508344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.508438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.508464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 
00:34:09.325 [2024-07-23 10:54:57.508553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.508579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.508682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.508707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.508797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.508824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.508912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.508938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.509019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.509045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 
00:34:09.325 [2024-07-23 10:54:57.509131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.509156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.509249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.509274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.509367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.509393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.509474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.509508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.509599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.509624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 
00:34:09.325 [2024-07-23 10:54:57.509712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.509737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.509821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.509846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.509938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.509968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.510062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.510092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.325 qpair failed and we were unable to recover it. 00:34:09.325 [2024-07-23 10:54:57.510191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.325 [2024-07-23 10:54:57.510218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 
00:34:09.326 [2024-07-23 10:54:57.510299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.510326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.510413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.510441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.510554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.510582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.510673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.510699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.510783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.510809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 
00:34:09.326 [2024-07-23 10:54:57.510894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.510921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.511007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.511035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.511123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.511149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.511232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.511259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.511347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.511375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 
00:34:09.326 [2024-07-23 10:54:57.511463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.511499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.511601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.511629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.511723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.511750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.511832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.511858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.511953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.511980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 
00:34:09.326 [2024-07-23 10:54:57.512063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.512090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.512175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.512204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.512299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.512325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.512405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.512431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.512530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.512556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 
00:34:09.326 [2024-07-23 10:54:57.512665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.512691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.512780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.512805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.512897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.512925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.513008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.513035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.513135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.513165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 
00:34:09.326 [2024-07-23 10:54:57.513255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.513282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.513376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.513402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.513495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.513521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.513609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.513635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.513723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.513751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 
00:34:09.326 [2024-07-23 10:54:57.513846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.513872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.513957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.513984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.514083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.514108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.514192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.514219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.514312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.514338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 
00:34:09.326 [2024-07-23 10:54:57.514420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.514446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.326 [2024-07-23 10:54:57.514550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.326 [2024-07-23 10:54:57.514577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.326 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.514665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.514691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.514782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.514809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.514897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.514923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 
00:34:09.327 [2024-07-23 10:54:57.515006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.515035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.515123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.515150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.515246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.515272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.515354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.515381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.515463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.515496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 
00:34:09.327 [2024-07-23 10:54:57.515599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.515630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.515731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.515758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.515849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.515874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.515958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.515984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.516063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.516088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 
00:34:09.327 [2024-07-23 10:54:57.516177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.516202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.516285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.516313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.516400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.516427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.516517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.516545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.516638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.516664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 
00:34:09.327 [2024-07-23 10:54:57.516751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.516778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.516872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.516899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.516985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.517013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.517105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.517130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.517215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.517241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 
00:34:09.327 [2024-07-23 10:54:57.517330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.517355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.517450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.517476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.517578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.517604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.517686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.517712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.517807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.517838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 
00:34:09.327 [2024-07-23 10:54:57.517935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.517963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.518049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.518075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.518160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.518186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.518272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.518298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.518384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.518411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 
00:34:09.327 [2024-07-23 10:54:57.518501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.518530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.518637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.518662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.518750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.518776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.518869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.518894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.327 qpair failed and we were unable to recover it. 00:34:09.327 [2024-07-23 10:54:57.518981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.327 [2024-07-23 10:54:57.519008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.328 qpair failed and we were unable to recover it. 
00:34:09.328 [2024-07-23 10:54:57.519089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.519114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.519201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.519226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.519313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.519341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.519434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.519463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.519563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.519591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.519676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.519703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.519791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.519819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.519911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.519938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.520030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.520056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.520155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.520181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.520278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.520306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.520389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.520415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.520507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.520534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.520621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.520648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.520734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.520762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.520849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.520875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.520956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.520988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.521073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.521099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.521189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.521215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.521309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.521336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.521420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.521448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.521550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.521578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.521677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.521704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.521794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.521820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.521912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.521939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.522027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.522055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.522142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.522169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.522254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.522283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.522375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.522400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.522495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.522523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.522621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.522646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.522739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.522766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.522854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.522879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.522976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.523004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.523092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.523118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.523213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.523241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.523330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.328 [2024-07-23 10:54:57.523358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.328 qpair failed and we were unable to recover it.
00:34:09.328 [2024-07-23 10:54:57.523445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.523472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.523578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.523613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.523697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.523724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.523842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.523869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.523959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.523985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.524073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.524099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.524194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.524225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.524320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.524346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.524430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.524457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.524552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.524585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.524680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.524709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.524812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.524838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.524936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.524961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.525048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.525073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.525156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.525182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.525274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.525307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.525404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.525430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.525523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.525550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.525643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.525669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.525765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.525795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.525893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.525922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.526013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.526041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.526128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.526155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.526247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.526273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.526362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.526389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.526478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.526510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.526593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.526619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.526715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.526743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.526850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.526878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.526969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.526994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.527087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.527115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.527203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.527229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.527315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.527340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.527424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.527453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.527544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.527570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.329 [2024-07-23 10:54:57.527662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.329 [2024-07-23 10:54:57.527688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.329 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.527776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.527801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.527884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.527910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.528005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.528030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.528113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.528139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.528227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.528256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.528353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.528381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.528470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.528506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.528595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.528622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.528716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.528742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.528834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.528861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.528958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.528984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.529085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.529113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.529201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.529229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.529327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.529353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.529448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.529485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.529585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.529614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.529709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.529735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.529820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.529846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.529939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.529965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.530060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.530088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.530184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.530209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.530303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.330 [2024-07-23 10:54:57.530329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.330 qpair failed and we were unable to recover it.
00:34:09.330 [2024-07-23 10:54:57.530417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.330 [2024-07-23 10:54:57.530442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.330 qpair failed and we were unable to recover it. 00:34:09.330 [2024-07-23 10:54:57.530541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.330 [2024-07-23 10:54:57.530570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.330 qpair failed and we were unable to recover it. 00:34:09.330 [2024-07-23 10:54:57.530664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.330 [2024-07-23 10:54:57.530691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.330 qpair failed and we were unable to recover it. 00:34:09.330 [2024-07-23 10:54:57.530789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.330 [2024-07-23 10:54:57.530816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.330 qpair failed and we were unable to recover it. 00:34:09.330 [2024-07-23 10:54:57.530902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.330 [2024-07-23 10:54:57.530928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.330 qpair failed and we were unable to recover it. 
00:34:09.330 [2024-07-23 10:54:57.531022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.330 [2024-07-23 10:54:57.531049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.330 qpair failed and we were unable to recover it. 00:34:09.330 [2024-07-23 10:54:57.531138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.330 [2024-07-23 10:54:57.531165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.330 qpair failed and we were unable to recover it. 00:34:09.330 [2024-07-23 10:54:57.531252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.330 [2024-07-23 10:54:57.531280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.330 qpair failed and we were unable to recover it. 00:34:09.330 [2024-07-23 10:54:57.531371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.330 [2024-07-23 10:54:57.531397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.330 qpair failed and we were unable to recover it. 00:34:09.330 [2024-07-23 10:54:57.531494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.330 [2024-07-23 10:54:57.531526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.330 qpair failed and we were unable to recover it. 
00:34:09.330 [2024-07-23 10:54:57.531623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.330 [2024-07-23 10:54:57.531651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.330 qpair failed and we were unable to recover it. 00:34:09.330 [2024-07-23 10:54:57.531745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.330 [2024-07-23 10:54:57.531773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.330 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.531866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.531893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.531981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.532008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.532096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.532122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 
00:34:09.331 [2024-07-23 10:54:57.532209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.532241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.532333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.532359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.532445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.532471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.532596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.532622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.532718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.532745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 
00:34:09.331 [2024-07-23 10:54:57.532834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.532861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.532946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.532974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.533055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.533080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.533174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.533201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.533290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.533316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 
00:34:09.331 [2024-07-23 10:54:57.533406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.533435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.533537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.533565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.533659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.533685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.533774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.533800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.533893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.533919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 
00:34:09.331 [2024-07-23 10:54:57.534005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.534032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.534125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.534152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.534241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.534268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.534350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.534376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.534461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.534492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 
00:34:09.331 [2024-07-23 10:54:57.534595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.534622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.534717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.534746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.534840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.534867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.534955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.534982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.535072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.535098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 
00:34:09.331 [2024-07-23 10:54:57.535189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.535215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.535309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.535335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.535420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.535452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.535550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.535576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.535672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.535699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 
00:34:09.331 [2024-07-23 10:54:57.535785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.535810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.535896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.535922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.536011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.536038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.536118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.331 [2024-07-23 10:54:57.536143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.331 qpair failed and we were unable to recover it. 00:34:09.331 [2024-07-23 10:54:57.536225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.536250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 
00:34:09.332 [2024-07-23 10:54:57.536331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.536357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.536446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.536475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.536580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.536606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.536697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.536724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.536814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.536841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 
00:34:09.332 [2024-07-23 10:54:57.536930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.536960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.537060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.537087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.537176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.537202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.537292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.537320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.537412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.537438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 
00:34:09.332 [2024-07-23 10:54:57.537543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.537572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.537668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.537694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.537780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.537808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.537899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.537924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.538022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.538048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 
00:34:09.332 [2024-07-23 10:54:57.538135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.538160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.538243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.538268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.538369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.538395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.538491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.538516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.538613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.538647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 
00:34:09.332 [2024-07-23 10:54:57.538750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.538777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.538864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.538891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.538984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.539010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.539106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.539133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.539222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.539248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 
00:34:09.332 [2024-07-23 10:54:57.539336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.539363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.539452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.539486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.539587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.539615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.539710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.539736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.539827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.539855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 
00:34:09.332 [2024-07-23 10:54:57.539952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.539987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.540081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.540109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.540209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.540235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.540328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.540354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.540443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.540468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 
00:34:09.332 [2024-07-23 10:54:57.540578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.540604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.332 [2024-07-23 10:54:57.540691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.332 [2024-07-23 10:54:57.540717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.332 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.540799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.540824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.540909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.540934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.541021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.541047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 
00:34:09.333 [2024-07-23 10:54:57.541144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.541169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.541257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.541283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.541366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.541392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.541489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.541520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.541622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.541650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 
00:34:09.333 [2024-07-23 10:54:57.541733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.541760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.541855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.541883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.541969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.541994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.542086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.542111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.542200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.542226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 
00:34:09.333 [2024-07-23 10:54:57.542317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.542344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.542432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.542459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.542558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.542583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.542679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.542704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.542784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.542809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 
00:34:09.333 [2024-07-23 10:54:57.542904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.542929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.543022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.543048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.543151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.543181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.543279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.543308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.543405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.543433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 
00:34:09.333 [2024-07-23 10:54:57.543544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.543571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.543659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.543685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.543781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.543809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.543915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.543941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.544027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.544053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 
00:34:09.333 [2024-07-23 10:54:57.544146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.544173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.544259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.544288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.544381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.544407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.544507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.544534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.544620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.544645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 
00:34:09.333 [2024-07-23 10:54:57.544730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.544756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.544843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.544869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.333 [2024-07-23 10:54:57.544960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.333 [2024-07-23 10:54:57.544985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.333 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.545072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.545102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.545196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.545224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 
00:34:09.334 [2024-07-23 10:54:57.545316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.545345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.545432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.545458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.545577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.545605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.545708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.545734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.545824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.545851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 
00:34:09.334 [2024-07-23 10:54:57.545942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.545969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.546059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.546085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.546175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.546203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.546290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.546316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.546404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.546432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 
00:34:09.334 [2024-07-23 10:54:57.546525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.546552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.546645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.546672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.546798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.546823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.546905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.546931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.547014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.547040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 
00:34:09.334 [2024-07-23 10:54:57.547118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.547143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.547233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.547258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.547360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.547387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.547473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.547507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.547593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.547619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 
00:34:09.334 [2024-07-23 10:54:57.547716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.547743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.547832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.547858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.334 qpair failed and we were unable to recover it. 00:34:09.334 [2024-07-23 10:54:57.547950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.334 [2024-07-23 10:54:57.547977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.548066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.548093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.548193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.548220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 
00:34:09.335 [2024-07-23 10:54:57.548320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.548350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.548449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.548478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.548579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.548607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.548696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.548724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.548821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.548851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 
00:34:09.335 [2024-07-23 10:54:57.548944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.548970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.549060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.549088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.549176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.549201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.549291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.549319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.549415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.549442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 
00:34:09.335 [2024-07-23 10:54:57.549552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.549579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.549670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.549695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.549789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.549816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.549906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.549937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.550021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.550047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 
00:34:09.335 [2024-07-23 10:54:57.550142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.550167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.550250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.550275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.550364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.550389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.574991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.575038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.575159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.575187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 
00:34:09.335 [2024-07-23 10:54:57.575293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.575319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.575430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.575458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.575623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.335 [2024-07-23 10:54:57.575662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.335 qpair failed and we were unable to recover it. 00:34:09.335 [2024-07-23 10:54:57.575831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.336 [2024-07-23 10:54:57.575871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.336 qpair failed and we were unable to recover it. 00:34:09.336 [2024-07-23 10:54:57.576019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.336 [2024-07-23 10:54:57.576046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.336 qpair failed and we were unable to recover it. 
00:34:09.336 [2024-07-23 10:54:57.576189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.336 [2024-07-23 10:54:57.576214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.336 qpair failed and we were unable to recover it. 00:34:09.336 [2024-07-23 10:54:57.576367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.336 [2024-07-23 10:54:57.576392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.336 qpair failed and we were unable to recover it. 00:34:09.336 [2024-07-23 10:54:57.576548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.336 [2024-07-23 10:54:57.576574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.336 qpair failed and we were unable to recover it. 00:34:09.336 [2024-07-23 10:54:57.576706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.336 [2024-07-23 10:54:57.576731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.336 qpair failed and we were unable to recover it. 00:34:09.336 [2024-07-23 10:54:57.591497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.336 [2024-07-23 10:54:57.591537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.336 qpair failed and we were unable to recover it. 
00:34:09.336 [2024-07-23 10:54:57.591715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.336 [2024-07-23 10:54:57.591757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.336 qpair failed and we were unable to recover it. 00:34:09.336 [2024-07-23 10:54:57.591916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.336 [2024-07-23 10:54:57.591956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.336 qpair failed and we were unable to recover it. 00:34:09.336 [2024-07-23 10:54:57.592093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.336 [2024-07-23 10:54:57.592121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.336 qpair failed and we were unable to recover it. 00:34:09.336 [2024-07-23 10:54:57.592250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.336 [2024-07-23 10:54:57.592277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.336 qpair failed and we were unable to recover it. 00:34:09.336 [2024-07-23 10:54:57.592425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.336 [2024-07-23 10:54:57.592451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.336 qpair failed and we were unable to recover it. 
00:34:09.339 [identical posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." message triples repeated through 2024-07-23 10:54:57.608090; duplicates elided]
00:34:09.339 [2024-07-23 10:54:57.608203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.339 [2024-07-23 10:54:57.608228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.339 qpair failed and we were unable to recover it. 00:34:09.339 [2024-07-23 10:54:57.608320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.339 [2024-07-23 10:54:57.608346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.339 qpair failed and we were unable to recover it. 00:34:09.339 [2024-07-23 10:54:57.608441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.339 [2024-07-23 10:54:57.608467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.339 qpair failed and we were unable to recover it. 00:34:09.339 [2024-07-23 10:54:57.608617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.339 [2024-07-23 10:54:57.608642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.339 qpair failed and we were unable to recover it. 00:34:09.339 [2024-07-23 10:54:57.608753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.339 [2024-07-23 10:54:57.608778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.339 qpair failed and we were unable to recover it. 
00:34:09.339 [2024-07-23 10:54:57.608871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.339 [2024-07-23 10:54:57.608896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.339 qpair failed and we were unable to recover it. 00:34:09.339 [2024-07-23 10:54:57.608989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.339 [2024-07-23 10:54:57.609016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.339 qpair failed and we were unable to recover it. 00:34:09.339 [2024-07-23 10:54:57.609119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.339 [2024-07-23 10:54:57.609145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.339 qpair failed and we were unable to recover it. 00:34:09.339 [2024-07-23 10:54:57.609234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.339 [2024-07-23 10:54:57.609265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.339 qpair failed and we were unable to recover it. 00:34:09.339 [2024-07-23 10:54:57.609382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.118700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 
00:34:09.951 [2024-07-23 10:54:58.118999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.119030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.119202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.119228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.119385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.119411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.119576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.119603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.119754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.119781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 
00:34:09.951 [2024-07-23 10:54:58.119878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.119905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.120033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.120058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.120185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.120213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.120348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.120373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.120509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.120535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 
00:34:09.951 [2024-07-23 10:54:58.120672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.120697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.120819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.120845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.120955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.120980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.121111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.121137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.121255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.121280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 
00:34:09.951 [2024-07-23 10:54:58.121376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.121403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.121500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.121526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.121628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.121654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.121776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.121803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.121937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.121964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 
00:34:09.951 [2024-07-23 10:54:58.122058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.122083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.122185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.122217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.122315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.951 [2024-07-23 10:54:58.122342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.951 qpair failed and we were unable to recover it. 00:34:09.951 [2024-07-23 10:54:58.122467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.122507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.122632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.122659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 
00:34:09.952 [2024-07-23 10:54:58.122764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.122790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.122878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.122905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.123032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.123060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.123162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.123189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.123281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.123308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 
00:34:09.952 [2024-07-23 10:54:58.123429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.123456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.123589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.123617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.123713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.123739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.123843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.123869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.123992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.124018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 
00:34:09.952 [2024-07-23 10:54:58.124153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.124180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.124310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.124336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.124426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.124455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.124585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.124612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.124723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.124750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 
00:34:09.952 [2024-07-23 10:54:58.124872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.124898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.124990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.125018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.125136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.125162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.125298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.125325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.125423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.125450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 
00:34:09.952 [2024-07-23 10:54:58.125570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.125598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.125704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.125732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.125825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.125853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.125939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.125971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.126095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.126121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 
00:34:09.952 [2024-07-23 10:54:58.126232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.126259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.126354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.126381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.126510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.126539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.126634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.126661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.126760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.126787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 
00:34:09.952 [2024-07-23 10:54:58.126884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.126912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.127026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.127052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.127142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.127168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.952 [2024-07-23 10:54:58.127290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.952 [2024-07-23 10:54:58.127318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.952 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.127440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.127468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 
00:34:09.953 [2024-07-23 10:54:58.127630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.127674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.127809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.127840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.127948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.127976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.128091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.128119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.128210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.128238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 
00:34:09.953 [2024-07-23 10:54:58.128335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.128362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.128489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.128517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.128632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.128660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.128778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.128806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.128904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.128931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 
00:34:09.953 [2024-07-23 10:54:58.129045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.129073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.129164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.129192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.129321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.129350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.129472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.129504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.129591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.129617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 
00:34:09.953 [2024-07-23 10:54:58.129718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.129750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.129833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.129860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.129973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.129999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.130100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.130126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.130232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.130260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 
00:34:09.953 [2024-07-23 10:54:58.130384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.130410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.130503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.130531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.130624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.130651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.130742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.130770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.130923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.130949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 
00:34:09.953 [2024-07-23 10:54:58.131065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.131092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.131201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.131227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.131324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.131352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.131436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.131461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.131590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.131618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 
00:34:09.953 [2024-07-23 10:54:58.131701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.131728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.131849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.131876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.131968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.131994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.132131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.953 [2024-07-23 10:54:58.132174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.953 qpair failed and we were unable to recover it. 00:34:09.953 [2024-07-23 10:54:58.132290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.132332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 
00:34:09.954 [2024-07-23 10:54:58.132429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.132458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.132569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.132598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.132683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.132710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.132804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.132833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.132955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.132982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 
00:34:09.954 [2024-07-23 10:54:58.133066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.133092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.133193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.133221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.133304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.133337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.133420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.133447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.133536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.133563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 
00:34:09.954 [2024-07-23 10:54:58.133652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.133679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.133767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.133793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.133879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.133906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.134000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.134031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.134117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.134147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 
00:34:09.954 [2024-07-23 10:54:58.134233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.134260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.134384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.134411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.134515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.134543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.134630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.134657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.134746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.134774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 
00:34:09.954 [2024-07-23 10:54:58.134868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.134895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.134996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.135024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.135128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.135155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.135246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.135279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.135371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.135400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 
00:34:09.954 [2024-07-23 10:54:58.135495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.135524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.135630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.135658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.135762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.135789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.135882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.135911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.136035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.136063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 
00:34:09.954 [2024-07-23 10:54:58.136157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.136186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.136315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.136342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.136430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.136457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.136565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.954 [2024-07-23 10:54:58.136596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.954 qpair failed and we were unable to recover it. 00:34:09.954 [2024-07-23 10:54:58.136715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.136744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 
00:34:09.955 [2024-07-23 10:54:58.136866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.136895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.137015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.137043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.137130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.137157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.137253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.137281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.137424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.137452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 
00:34:09.955 [2024-07-23 10:54:58.137580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.137607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.137714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.137742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.137841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.137868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.137959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.137986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.138084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.138112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 
00:34:09.955 [2024-07-23 10:54:58.138199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.138226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.138332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.138359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.138442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.138474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.138566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.138593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.138716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.138743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 
00:34:09.955 [2024-07-23 10:54:58.138866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.138893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.138982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.139011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.139103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.139132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.139253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.139281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.139366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.139394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 
00:34:09.955 [2024-07-23 10:54:58.139497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.139526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.139615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.139643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.139756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.139786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.139874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.139902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.139990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.140017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 
00:34:09.955 [2024-07-23 10:54:58.140110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.140137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.140246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.140274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.140365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.140395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.140502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.140530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.140640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.140667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 
00:34:09.955 [2024-07-23 10:54:58.140762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.140792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.140910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.140941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.141066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.141095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.955 qpair failed and we were unable to recover it. 00:34:09.955 [2024-07-23 10:54:58.141183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.955 [2024-07-23 10:54:58.141212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.956 qpair failed and we were unable to recover it. 00:34:09.956 [2024-07-23 10:54:58.141301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.956 [2024-07-23 10:54:58.141328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.956 qpair failed and we were unable to recover it. 
00:34:09.959 [2024-07-23 10:54:58.154473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8e320 (9): Bad file descriptor 
00:34:09.959 [2024-07-23 10:54:58.156099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.156127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.156262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.156293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.156381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.156407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.156490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.156518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.156612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.156637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 
00:34:09.959 [2024-07-23 10:54:58.156723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.156749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.156838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.156865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.156964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.156990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.157085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.157115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.157208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.157238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 
00:34:09.959 [2024-07-23 10:54:58.157331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.157361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.157453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.157489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.157583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.157611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.157728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.157756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.157871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.157904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 
00:34:09.959 [2024-07-23 10:54:58.157987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.158014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.158098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.158127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.158208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.158234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.158325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.158355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.158440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.158467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 
00:34:09.959 [2024-07-23 10:54:58.158564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.158594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.158684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.959 [2024-07-23 10:54:58.158713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.959 qpair failed and we were unable to recover it. 00:34:09.959 [2024-07-23 10:54:58.158834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.158864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.158961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.158991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.159108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.159136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 
00:34:09.960 [2024-07-23 10:54:58.159256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.159283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.159364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.159390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.159487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.159516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.159610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.159640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.159762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.159799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 
00:34:09.960 [2024-07-23 10:54:58.159901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.159927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.160019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.160057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.160153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.160178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.160304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.160333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.160416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.160442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 
00:34:09.960 [2024-07-23 10:54:58.160543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.160571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.160661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.160688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.160774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.160802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.160882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.160908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.160994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.161023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 
00:34:09.960 [2024-07-23 10:54:58.161112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.161141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.161232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.161261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.161352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.161380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.161503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.161532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.161619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.161644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 
00:34:09.960 [2024-07-23 10:54:58.161736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.161764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.161844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.161872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.161962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.161994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.162082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.162109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.162192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.162218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 
00:34:09.960 [2024-07-23 10:54:58.162296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.162322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.162448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.162478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.162579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.162605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.162693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.162723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.162813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.162842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 
00:34:09.960 [2024-07-23 10:54:58.162967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.162995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.163081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.960 [2024-07-23 10:54:58.163110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.960 qpair failed and we were unable to recover it. 00:34:09.960 [2024-07-23 10:54:58.163200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.163228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.163333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.163364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.163457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.163503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 
00:34:09.961 [2024-07-23 10:54:58.163595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.163625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.163718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.163754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.163849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.163874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.163960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.163987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.164070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.164097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 
00:34:09.961 [2024-07-23 10:54:58.164216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.164245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.164330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.164358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.164454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.164490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.164583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.164610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.164691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.164719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 
00:34:09.961 [2024-07-23 10:54:58.164807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.164836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.164928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.164956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.165048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.165078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.165166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.165193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.165271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.165298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 
00:34:09.961 [2024-07-23 10:54:58.165377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.165404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.165498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.165529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.165616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.165645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.165738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.165766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.165849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.165877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 
00:34:09.961 [2024-07-23 10:54:58.165968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.166005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.166090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.166123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.166212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.166239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.166328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.166355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 00:34:09.961 [2024-07-23 10:54:58.166446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.961 [2024-07-23 10:54:58.166475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.961 qpair failed and we were unable to recover it. 
00:34:09.961 [2024-07-23 10:54:58.166617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.166644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.166738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.166766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.166856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.166884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.166967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.166995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.167110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.167138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 
00:34:09.962 [2024-07-23 10:54:58.167258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.167297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.167393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.167422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.167516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.167547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.167637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.167671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.167764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.167790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 
00:34:09.962 [2024-07-23 10:54:58.167883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.167911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.168003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.168032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.168134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.168160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.168246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.168275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.168362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.168389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 
00:34:09.962 [2024-07-23 10:54:58.168470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.168507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.168589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.168616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.168694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.168721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.168807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.168833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.168916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.168943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 
00:34:09.962 [2024-07-23 10:54:58.169028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.169055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.169145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.169172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.169260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.169287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.169383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.169413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.169563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.169592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 
00:34:09.962 [2024-07-23 10:54:58.169709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.169735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.169825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.169852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.169933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.169959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.170042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.170068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.170157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.170187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 
00:34:09.962 [2024-07-23 10:54:58.170305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.170335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.170432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.170461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.170565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.170593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.170676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.170702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 00:34:09.962 [2024-07-23 10:54:58.170801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.962 [2024-07-23 10:54:58.170831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.962 qpair failed and we were unable to recover it. 
00:34:09.963 [2024-07-23 10:54:58.170960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.170989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.171078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.171104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.171192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.171223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.171349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.171376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.171457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.171494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 
00:34:09.963 [2024-07-23 10:54:58.171582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.171610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.171728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.171755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.171840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.171867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.171945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.171971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.172062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.172089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 
00:34:09.963 [2024-07-23 10:54:58.172180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.172208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.172293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.172319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.172407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.172438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.172531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.172561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.172648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.172675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 
00:34:09.963 [2024-07-23 10:54:58.172769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.172796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.172879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.172907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.173036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.173067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.173156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.173185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.173270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.173297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 
00:34:09.963 [2024-07-23 10:54:58.173381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.173408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.173494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.173520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.173610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.173638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.173758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.173785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.173876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.173904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 
00:34:09.963 [2024-07-23 10:54:58.173994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.174022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.174139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.174173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.174271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.174298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.174390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.174422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.174523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.174553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 
00:34:09.963 [2024-07-23 10:54:58.174670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.174705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.174837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.174867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.174949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.174976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.175068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.963 [2024-07-23 10:54:58.175094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.963 qpair failed and we were unable to recover it. 00:34:09.963 [2024-07-23 10:54:58.175179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.175205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 
00:34:09.964 [2024-07-23 10:54:58.175303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.175333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 00:34:09.964 [2024-07-23 10:54:58.175422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.175452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 00:34:09.964 [2024-07-23 10:54:58.175561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.175601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 00:34:09.964 [2024-07-23 10:54:58.175683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.175712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 00:34:09.964 [2024-07-23 10:54:58.175804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.175833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 
00:34:09.964 [2024-07-23 10:54:58.175928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.175959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 00:34:09.964 [2024-07-23 10:54:58.176054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.176082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 00:34:09.964 [2024-07-23 10:54:58.176173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.176200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 00:34:09.964 [2024-07-23 10:54:58.176280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.176307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 00:34:09.964 [2024-07-23 10:54:58.176397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.176423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 
00:34:09.964 [2024-07-23 10:54:58.176513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.176548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 00:34:09.964 [2024-07-23 10:54:58.176631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.176658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 00:34:09.964 [2024-07-23 10:54:58.176741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.176769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 00:34:09.964 [2024-07-23 10:54:58.176853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.176881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 00:34:09.964 [2024-07-23 10:54:58.176962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.176988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 
00:34:09.964 [2024-07-23 10:54:58.177077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.964 [2024-07-23 10:54:58.177104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.964 qpair failed and we were unable to recover it. 
00:34:09.964 [... the three-line failure sequence above (posix.c:1037 connect() failed, errno = 111; nvme_tcp.c:2374 sock connection error; qpair failed and we were unable to recover it) repeats continuously from 10:54:58.177 through 10:54:58.191, cycling over tqpair=0x7fb6f0000b90, 0x1f80990, 0x7fb6e8000b90 and 0x7fb6e0000b90, always with addr=10.0.0.2, port=4420 ...] 
00:34:09.968 [2024-07-23 10:54:58.191461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.191500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.191594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.191621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.191701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.191728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.191810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.191837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.191919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.191946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 
00:34:09.968 [2024-07-23 10:54:58.192043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.192072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.192158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.192186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.192303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.192333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.192423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.192453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.192550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.192578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 
00:34:09.968 [2024-07-23 10:54:58.192688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.192716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.192816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.192843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.192931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.192958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.193046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.193074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.193158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.193188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 
00:34:09.968 [2024-07-23 10:54:58.193273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.193304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.193392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.193420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.193514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.193543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.193664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.193693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.193829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.193856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 
00:34:09.968 [2024-07-23 10:54:58.193947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.193975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.194092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.194119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.194207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.194234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.194324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.194352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.194444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.194474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 
00:34:09.968 [2024-07-23 10:54:58.194567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.194597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.194682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.194710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.194799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.194827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.194923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.194951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.195039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.195067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 
00:34:09.968 [2024-07-23 10:54:58.195186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.195213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.195308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.195336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.195430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.195460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.195554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.195582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.968 [2024-07-23 10:54:58.195672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.195699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 
00:34:09.968 [2024-07-23 10:54:58.195779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.968 [2024-07-23 10:54:58.195806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.968 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.195892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.195922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.196011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.196039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.196130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.196159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.196277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.196305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 
00:34:09.969 [2024-07-23 10:54:58.196401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.196428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.196511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.196538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.196657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.196685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.196771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.196799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.196883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.196911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 
00:34:09.969 [2024-07-23 10:54:58.196998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.197027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.197113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.197142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.197238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.197267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.197380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.197408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.197503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.197533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 
00:34:09.969 [2024-07-23 10:54:58.197626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.197653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.197776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.197803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.197919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.197946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.198036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.198064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.198153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.198183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 
00:34:09.969 [2024-07-23 10:54:58.198272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.198300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.198378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.198405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.198491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.198519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.198603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.198629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.198715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.198742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 
00:34:09.969 [2024-07-23 10:54:58.198861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.198888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.199002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.199029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.199144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.199174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.199300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.199330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.199453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.199495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 
00:34:09.969 [2024-07-23 10:54:58.199601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.199628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.199707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.199733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.199813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.199840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.199926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.199954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.969 [2024-07-23 10:54:58.200048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.200076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 
00:34:09.969 [2024-07-23 10:54:58.200167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.969 [2024-07-23 10:54:58.200197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.969 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.200294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.200322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.200407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.200435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.200527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.200555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.200644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.200671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 
00:34:09.970 [2024-07-23 10:54:58.200767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.200796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.200887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.200916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.200997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.201025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.201123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.201150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.201237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.201267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 
00:34:09.970 [2024-07-23 10:54:58.201356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.201385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.201478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.201513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.201605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.201634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.201750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.201777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.201896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.201922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 
00:34:09.970 [2024-07-23 10:54:58.202041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.202067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.202153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.202180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.202277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.202303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.202394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.202421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.202501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.202539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 
00:34:09.970 [2024-07-23 10:54:58.202626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.202654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.202736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.202768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.202858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.202886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.202975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.203004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.203111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.203141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 
00:34:09.970 [2024-07-23 10:54:58.203240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.203269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.203361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.203391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.203488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.203518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.203654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.203681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.203769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.203797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 
00:34:09.970 [2024-07-23 10:54:58.203880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.203907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.203994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.204022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.204111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.204139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.204229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.204257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.204354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.204385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 
00:34:09.970 [2024-07-23 10:54:58.204491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.204520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.970 [2024-07-23 10:54:58.204608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.970 [2024-07-23 10:54:58.204634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.970 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.204723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.204751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.204840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.204866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.204956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.204986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 
00:34:09.971 [2024-07-23 10:54:58.205084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.205112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.205201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.205228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.205324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.205353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.205458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.205494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.205595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.205625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 
00:34:09.971 [2024-07-23 10:54:58.205715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.205742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.205828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.205857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.205947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.205976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.206065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.206094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.206186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.206215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 
00:34:09.971 [2024-07-23 10:54:58.206302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.206330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.206411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.206438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.206526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.206554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.206659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.206687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.206793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.206822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 
00:34:09.971 [2024-07-23 10:54:58.206937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.206965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.207051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.207085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.207174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.207202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.207294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.207322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.207408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.207434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 
00:34:09.971 [2024-07-23 10:54:58.207531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.207559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.207647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.207678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.207762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.207789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.207878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.207907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.208003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.208031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 
00:34:09.971 [2024-07-23 10:54:58.208139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.971 [2024-07-23 10:54:58.208166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.971 qpair failed and we were unable to recover it. 00:34:09.971 [2024-07-23 10:54:58.208258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.208288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.208378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.208406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.208496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.208524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.208612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.208643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 
00:34:09.972 [2024-07-23 10:54:58.208737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.208766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.208864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.208893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.209002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.209031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.209120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.209147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.209227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.209254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 
00:34:09.972 [2024-07-23 10:54:58.209344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.209370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.209452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.209485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.209576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.209603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.209689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.209717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.209802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.209828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 
00:34:09.972 [2024-07-23 10:54:58.209913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.209942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.210039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.210067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.210172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.210199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.210280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.210307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.210392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.210418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 
00:34:09.972 [2024-07-23 10:54:58.210516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.210545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.210643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.210671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.210775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.210805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.210892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.210926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.211026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.211055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 
00:34:09.972 [2024-07-23 10:54:58.211145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.211173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.211259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.211287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.211381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.211409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.211499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.211527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.211611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.211640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 
00:34:09.972 [2024-07-23 10:54:58.211758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.211788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.211869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.211897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.211989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.212019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.212109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.212137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.212222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.212250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 
00:34:09.972 [2024-07-23 10:54:58.212330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.212357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.212440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.972 [2024-07-23 10:54:58.212467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.972 qpair failed and we were unable to recover it. 00:34:09.972 [2024-07-23 10:54:58.212566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.212595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.212682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.212711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.212800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.212828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 
00:34:09.973 [2024-07-23 10:54:58.212918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.212947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.213030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.213057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.213139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.213166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.213258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.213284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.213376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.213403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 
00:34:09.973 [2024-07-23 10:54:58.213492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.213523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.213614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.213644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.213728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.213756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.213835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.213862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.213942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.213970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 
00:34:09.973 [2024-07-23 10:54:58.214058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.214087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.214177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.214206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.214292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.214319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.214409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.214435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.214538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.214567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 
00:34:09.973 [2024-07-23 10:54:58.214669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.214699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.214809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.214837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.214952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.214979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.215059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.215085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.215178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.215208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 
00:34:09.973 [2024-07-23 10:54:58.215333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.215360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.215474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.215510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.215600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.215628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.215751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.215776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.215901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.215927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 
00:34:09.973 [2024-07-23 10:54:58.216011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.216038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.216120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.216146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.216228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.216254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.216340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.216370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 00:34:09.973 [2024-07-23 10:54:58.216455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.216491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.973 qpair failed and we were unable to recover it. 
00:34:09.973 [2024-07-23 10:54:58.216581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.973 [2024-07-23 10:54:58.216609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.216694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.216721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.216809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.216837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.216926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.216954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.217045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.217074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 
00:34:09.974 [2024-07-23 10:54:58.217160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.217190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.217295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.217323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.217417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.217446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.217541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.217569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.217691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.217720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 
00:34:09.974 [2024-07-23 10:54:58.217810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.217838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.217954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.217981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.218072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.218102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.218190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.218219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.218324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.218353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 
00:34:09.974 [2024-07-23 10:54:58.218444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.218470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.218568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.218595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.218682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.218709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.218790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.218816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.218901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.218927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 
00:34:09.974 [2024-07-23 10:54:58.219018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.219050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.219151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.219180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.219270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.219299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.219389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.219419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.219506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.219534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 
00:34:09.974 [2024-07-23 10:54:58.219627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.219656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.219744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.219772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.219856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.219884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.219971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.220000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.220091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.220120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 
00:34:09.974 [2024-07-23 10:54:58.220235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.220265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.220353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.220381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.220514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.220542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.220638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.220665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 00:34:09.974 [2024-07-23 10:54:58.220757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.974 [2024-07-23 10:54:58.220786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.974 qpair failed and we were unable to recover it. 
00:34:09.974 [2024-07-23 10:54:58.220880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.220907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.221007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.221034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.221120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.221147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.221238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.221267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.221350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.221377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 
00:34:09.975 [2024-07-23 10:54:58.221458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.221492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.221580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.221608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.221700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.221729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.221822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.221851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.221964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.221990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 
00:34:09.975 [2024-07-23 10:54:58.222083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.222114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.222198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.222225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.222310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.222343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.222431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.222460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.222562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.222591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 
00:34:09.975 [2024-07-23 10:54:58.222683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.222713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.222823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.222852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.222937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.222963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.223045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.223073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.223156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.223183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 
00:34:09.975 [2024-07-23 10:54:58.223294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.223324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.223416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.223445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.223550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.223579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.223665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.223693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.223779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.223807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 
00:34:09.975 [2024-07-23 10:54:58.223895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.223925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.224015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.224044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.224131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.224157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.224252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.224279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 00:34:09.975 [2024-07-23 10:54:58.224368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.975 [2024-07-23 10:54:58.224395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.975 qpair failed and we were unable to recover it. 
00:34:09.975–00:34:09.979 [2024-07-23 10:54:58.224492 – 10:54:58.237900] [repeated entries elided: posix.c:1037:posix_sock_create *ERROR* connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock *ERROR* sock connection error for tqpairs 0x1f80990, 0x7fb6e0000b90, 0x7fb6e8000b90, and 0x7fb6f0000b90 with addr=10.0.0.2, port=4420; each qpair failed and could not be recovered]
00:34:09.979 [2024-07-23 10:54:58.238020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.238046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.238134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.238160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.238240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.238273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.238401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.238436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.238565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.238602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 
00:34:09.979 [2024-07-23 10:54:58.238687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.238713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.238802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.238835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.238934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.238960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.239042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.239076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.239173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.239201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 
00:34:09.979 [2024-07-23 10:54:58.239286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.239322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.239414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.239440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.239541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.239570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.239675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.239705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.239794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.239825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 
00:34:09.979 [2024-07-23 10:54:58.239906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.239935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.240029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.240058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.240152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.240182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.240266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.240294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.240384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.240413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 
00:34:09.979 [2024-07-23 10:54:58.240533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.240564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.240660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.240687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.240768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.240796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.240884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.240920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.241001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.241027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 
00:34:09.979 [2024-07-23 10:54:58.241115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.241153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.241247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.241274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.241398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.241426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.241521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.241547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.241633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.241669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 
00:34:09.979 [2024-07-23 10:54:58.241760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.241788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.979 [2024-07-23 10:54:58.241871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.979 [2024-07-23 10:54:58.241906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.979 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.242009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.242041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.242125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.242157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.242252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.242279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 
00:34:09.980 [2024-07-23 10:54:58.242392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.242423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.242540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.242569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.242676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.242701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.242790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.242818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.242929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.242955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 
00:34:09.980 [2024-07-23 10:54:58.243052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.243080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.243167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.243194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.243290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.243318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.243407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.243436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.243542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.243571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 
00:34:09.980 [2024-07-23 10:54:58.243659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.243688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.243782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.243811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.243895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.243922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.244023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.244050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.244139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.244165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 
00:34:09.980 [2024-07-23 10:54:58.244275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.244313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.244398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.244424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.244522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.244552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.244640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.244678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.244768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.244795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 
00:34:09.980 [2024-07-23 10:54:58.244888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.244920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.245010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.245035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.245117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.245144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.245228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.245253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.245331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.245357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 
00:34:09.980 [2024-07-23 10:54:58.245449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.245503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.245597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.245624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.245722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.245757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.245864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.245891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.245981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.246018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 
00:34:09.980 [2024-07-23 10:54:58.246108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.246136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.980 qpair failed and we were unable to recover it. 00:34:09.980 [2024-07-23 10:54:58.246234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.980 [2024-07-23 10:54:58.246263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.981 qpair failed and we were unable to recover it. 00:34:09.981 [2024-07-23 10:54:58.246359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.981 [2024-07-23 10:54:58.246385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.981 qpair failed and we were unable to recover it. 00:34:09.981 [2024-07-23 10:54:58.246472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.981 [2024-07-23 10:54:58.246508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.981 qpair failed and we were unable to recover it. 00:34:09.981 [2024-07-23 10:54:58.246594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.981 [2024-07-23 10:54:58.246620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.981 qpair failed and we were unable to recover it. 
00:34:09.981 [2024-07-23 10:54:58.246703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.981 [2024-07-23 10:54:58.246729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.981 qpair failed and we were unable to recover it. 00:34:09.981 [2024-07-23 10:54:58.246837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.981 [2024-07-23 10:54:58.246864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.981 qpair failed and we were unable to recover it. 00:34:09.981 [2024-07-23 10:54:58.246950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.981 [2024-07-23 10:54:58.246977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.981 qpair failed and we were unable to recover it. 00:34:09.981 [2024-07-23 10:54:58.247070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.981 [2024-07-23 10:54:58.247097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.981 qpair failed and we were unable to recover it. 00:34:09.981 [2024-07-23 10:54:58.247181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.981 [2024-07-23 10:54:58.247207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.981 qpair failed and we were unable to recover it. 
00:34:09.981 [2024-07-23 10:54:58.247312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.247341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.247433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.247464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.247572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.247599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.247680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.247707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.247818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.247845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.247933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.247970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.248051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.248079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.248161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.248187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.248276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.248301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.248390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.248418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.248517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.248553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.248655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.248686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.248779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.248820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.248913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.248942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.249033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.249067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.249163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.249190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.249269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.249297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.249386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.249415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.249508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.249535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.249635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.249664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.981 qpair failed and we were unable to recover it.
00:34:09.981 [2024-07-23 10:54:58.249753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.981 [2024-07-23 10:54:58.249780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.249871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.249900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.249984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.250012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.250098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.250134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.250217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.250243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.250332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.250370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.250466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.250503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.250610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.250646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.250750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.250776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.250859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.250886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.250968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.250995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.251098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.251125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.251207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.251234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.251322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.251358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.251447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.251473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.251562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.251588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.251665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.251692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.251774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.251801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.251882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.251910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.252005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.252035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.252124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.252152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.252247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.252274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.252356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.252384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.252471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.252522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.252621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.252648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.252736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.252774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.252868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.252895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.252978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.253005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.253094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.253121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.253214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.253252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.253367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.253398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.253491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.253528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.253635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.253662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.253757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.253788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.253883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.253910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.982 qpair failed and we were unable to recover it.
00:34:09.982 [2024-07-23 10:54:58.253998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.982 [2024-07-23 10:54:58.254027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.254130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.254157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.254241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.254277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.254362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.254388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.254492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.254519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.254642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.254669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.254767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.254793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.254876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.254912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.255009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.255039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.255138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.255166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.255254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.255289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.255383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.255410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.255503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.255531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.255625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.255653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.255737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.255764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.255848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.255876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.255965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.255995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.256085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.256118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.256208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.256238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.256327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.256362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.256452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.256478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.256607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.256635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.256722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.256750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.256883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.256911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.256999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.257032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.257123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.257156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.257276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.257308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.257404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.257433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.257523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.257552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.257674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.257714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.257815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.257843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.257930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.257958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.258048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.258078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.258202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.258230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.983 [2024-07-23 10:54:58.258320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.983 [2024-07-23 10:54:58.258349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.983 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.258438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.258466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.258562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.258592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.258679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.258715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.258839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.258874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.258971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.259000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.259085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.259112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.259196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.259221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.259306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.259334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.259454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.259489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.259579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.259608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.259698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.259726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.259814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.259841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.259929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.259955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.260036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.260062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.260147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.260177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.260272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.260301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.260427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.260457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.260594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.260622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.260742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.260770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.260858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.260884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.261010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.261039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.261131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.261158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.261245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.261279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.261373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.261400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.261521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.984 [2024-07-23 10:54:58.261552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.984 qpair failed and we were unable to recover it.
00:34:09.984 [2024-07-23 10:54:58.261645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.984 [2024-07-23 10:54:58.261684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.984 qpair failed and we were unable to recover it. 00:34:09.984 [2024-07-23 10:54:58.261792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.984 [2024-07-23 10:54:58.261822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.984 qpair failed and we were unable to recover it. 00:34:09.984 [2024-07-23 10:54:58.261913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.984 [2024-07-23 10:54:58.261942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.984 qpair failed and we were unable to recover it. 00:34:09.984 [2024-07-23 10:54:58.262030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.984 [2024-07-23 10:54:58.262057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.984 qpair failed and we were unable to recover it. 00:34:09.984 [2024-07-23 10:54:58.262141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.984 [2024-07-23 10:54:58.262177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.984 qpair failed and we were unable to recover it. 
00:34:09.984 [2024-07-23 10:54:58.262273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.984 [2024-07-23 10:54:58.262301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.984 qpair failed and we were unable to recover it. 00:34:09.984 [2024-07-23 10:54:58.262391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.984 [2024-07-23 10:54:58.262424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.984 qpair failed and we were unable to recover it. 00:34:09.984 [2024-07-23 10:54:58.262523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.984 [2024-07-23 10:54:58.262558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.984 qpair failed and we were unable to recover it. 00:34:09.984 [2024-07-23 10:54:58.262674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.984 [2024-07-23 10:54:58.262701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.984 qpair failed and we were unable to recover it. 00:34:09.984 [2024-07-23 10:54:58.262790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.262817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 
00:34:09.985 [2024-07-23 10:54:58.262917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.262943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.263022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.263049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.263165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.263192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.263280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.263307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.263395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.263422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 
00:34:09.985 [2024-07-23 10:54:58.263519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.263549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.263644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.263671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.263757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.263786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.263880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.263909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.264011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.264048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 
00:34:09.985 [2024-07-23 10:54:58.264140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.264168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.264257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.264293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.264384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.264410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.264492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.264529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.264620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.264646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 
00:34:09.985 [2024-07-23 10:54:58.264728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.264763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.264859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.264887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.264969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.265003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.265099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.265126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.265220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.265250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 
00:34:09.985 [2024-07-23 10:54:58.265343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.265372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.265491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.265527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.265611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.265639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.265767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.265799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.265895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.265931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 
00:34:09.985 [2024-07-23 10:54:58.266027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.266055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.266172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.266201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.266294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.266325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.266421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.266451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.266546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.266583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 
00:34:09.985 [2024-07-23 10:54:58.266674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.266702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.266801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.266832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.266935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.985 [2024-07-23 10:54:58.266964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.985 qpair failed and we were unable to recover it. 00:34:09.985 [2024-07-23 10:54:58.267057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.267087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.267169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.267196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 
00:34:09.986 [2024-07-23 10:54:58.267293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.267324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.267451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.267478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.267565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.267592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.267682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.267708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.267822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.267853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 
00:34:09.986 [2024-07-23 10:54:58.267943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.267976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.268067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.268097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.268191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.268219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.268314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.268342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.268436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.268464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 
00:34:09.986 [2024-07-23 10:54:58.268565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.268600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.268691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.268718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.268802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.268836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.268917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.268948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.269064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.269091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 
00:34:09.986 [2024-07-23 10:54:58.269172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.269198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.269326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.269354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.269450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.269489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.269595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.269625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.269767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.269796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 
00:34:09.986 [2024-07-23 10:54:58.269894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.269921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.270008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.270041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.270138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.270165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.270252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.270289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 00:34:09.986 [2024-07-23 10:54:58.270375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.986 [2024-07-23 10:54:58.270401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.986 qpair failed and we were unable to recover it. 
00:34:09.986 [2024-07-23 10:54:58.270489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.270517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 00:34:09.987 [2024-07-23 10:54:58.270605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.270634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 00:34:09.987 [2024-07-23 10:54:58.270723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.270749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 00:34:09.987 [2024-07-23 10:54:58.270866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.270893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 00:34:09.987 [2024-07-23 10:54:58.270990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.271023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 
00:34:09.987 [2024-07-23 10:54:58.271118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.271145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 00:34:09.987 [2024-07-23 10:54:58.271229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.271261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 00:34:09.987 [2024-07-23 10:54:58.271351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.271375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 00:34:09.987 [2024-07-23 10:54:58.271457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.271492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 00:34:09.987 [2024-07-23 10:54:58.271578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.271613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 
00:34:09.987 [2024-07-23 10:54:58.271732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.271769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 00:34:09.987 [2024-07-23 10:54:58.271865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.271896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 00:34:09.987 [2024-07-23 10:54:58.271981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.272018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 00:34:09.987 [2024-07-23 10:54:58.272125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.272155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 00:34:09.987 [2024-07-23 10:54:58.272257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.987 [2024-07-23 10:54:58.272288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.987 qpair failed and we were unable to recover it. 
00:34:09.987 [2024-07-23 10:54:58.272384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.272422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.272518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.272546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.272631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.272666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.272756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.272783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.272876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.272912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.273003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.273030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.273109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.273145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.273241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.273270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.273352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.273379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.273459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.273496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.273645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.273672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.273751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.273777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.273893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.273919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.274000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.274027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.274136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.274163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.274287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.274318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.274411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.274448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.274555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.274581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.274667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.274693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.987 qpair failed and we were unable to recover it.
00:34:09.987 [2024-07-23 10:54:58.274778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.987 [2024-07-23 10:54:58.274803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.274880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.274907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.274991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.275021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.275109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.275136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.275224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.275254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.275369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.275398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.275493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.275522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.275610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.275638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.275734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.275764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.275854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.275884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.275965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.276002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.276103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.276132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.276221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.276256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.276352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.276379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.276501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.276535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.276622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.276650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.276740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.276769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.276851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.276888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.276999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.277025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.277116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.277144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.277237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.277268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.277364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.277399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.277494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.277534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.277633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.277662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.277752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.277781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.277867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.277894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.278012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.278040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.278125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.278162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.278250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.278275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.278398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.278426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.278518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.278544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.278634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.278667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.278761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.278791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.278881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.278915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.279011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.988 [2024-07-23 10:54:58.279040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.988 qpair failed and we were unable to recover it.
00:34:09.988 [2024-07-23 10:54:58.279170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.279197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.279316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.279346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.279445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.279476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.279617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.279648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.279745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.279774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.279863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.279891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.279979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.280008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.280099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.280132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.280231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.280258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.280348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.280378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.280512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.280540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.280653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.280683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.280777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.280807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.280897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.280928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.281021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.281053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.281144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.281176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.281264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.281290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.281371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.281407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.281502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.281534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.281670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.281697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.281825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.281864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.281948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.281976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.282075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.282108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.282191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.282217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.282300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.282338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.282431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.282458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.282587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.282616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.282704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.282741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.282828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.282853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.282945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.282980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.283083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.283109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.283228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.283259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.283351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.283379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.283473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.283521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.283615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.989 [2024-07-23 10:54:58.283642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.989 qpair failed and we were unable to recover it.
00:34:09.989 [2024-07-23 10:54:58.283728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.990 [2024-07-23 10:54:58.283756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.990 qpair failed and we were unable to recover it.
00:34:09.990 [2024-07-23 10:54:58.283857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.990 [2024-07-23 10:54:58.283884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.990 qpair failed and we were unable to recover it.
00:34:09.990 [2024-07-23 10:54:58.283997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.990 [2024-07-23 10:54:58.284028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.990 qpair failed and we were unable to recover it.
00:34:09.990 [2024-07-23 10:54:58.284124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.990 [2024-07-23 10:54:58.284152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.990 qpair failed and we were unable to recover it.
00:34:09.990 [2024-07-23 10:54:58.284234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.990 [2024-07-23 10:54:58.284260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.990 qpair failed and we were unable to recover it.
00:34:09.990 [2024-07-23 10:54:58.284367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.990 [2024-07-23 10:54:58.284400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.990 qpair failed and we were unable to recover it.
00:34:09.990 [2024-07-23 10:54:58.284514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.284542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.284630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.284655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.284738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.284765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.284850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.284878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.284968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.284997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 
00:34:09.990 [2024-07-23 10:54:58.285086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.285113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.285199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.285227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.285309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.285337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.285437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.285468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.285578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.285605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 
00:34:09.990 [2024-07-23 10:54:58.285691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.285717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.285800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.285835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.285933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.285966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.286073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.286100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.286193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.286223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 
00:34:09.990 [2024-07-23 10:54:58.286317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.286345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.286436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.286465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.286577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.286605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.286698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.286726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.286810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.286837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 
00:34:09.990 [2024-07-23 10:54:58.286930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.286959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.287048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.287081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.287195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.287223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.287323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.287347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.287435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.287471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 
00:34:09.990 [2024-07-23 10:54:58.287567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.287601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.287697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.287736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.287826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.287854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.287937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.287974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.990 qpair failed and we were unable to recover it. 00:34:09.990 [2024-07-23 10:54:58.288062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.990 [2024-07-23 10:54:58.288090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 
00:34:09.991 [2024-07-23 10:54:58.288190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.288226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.288319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.288349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.288440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.288473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.288576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.288604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.288696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.288732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 
00:34:09.991 [2024-07-23 10:54:58.288819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.288847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.288941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.288977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.289074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.289101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.289183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.289210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.289296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.289336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 
00:34:09.991 [2024-07-23 10:54:58.289426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.289453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.289552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.289588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.289673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.289707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.289806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.289836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.289938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.289968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 
00:34:09.991 [2024-07-23 10:54:58.290054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.290088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.290177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.290213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.290312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.290342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.290428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.290455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.290547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.290575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 
00:34:09.991 [2024-07-23 10:54:58.290676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.290715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.290814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.290842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.290937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.290964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.291056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.291084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.291171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.291206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 
00:34:09.991 [2024-07-23 10:54:58.291301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.291329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.291446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.291474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.991 qpair failed and we were unable to recover it. 00:34:09.991 [2024-07-23 10:54:58.291581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.991 [2024-07-23 10:54:58.291610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.291717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.291745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.291831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.291858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 
00:34:09.992 [2024-07-23 10:54:58.291946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.291984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.292072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.292100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.292194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.292226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.292315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.292346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.292446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.292490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 
00:34:09.992 [2024-07-23 10:54:58.292575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.292601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.292691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.292722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.292818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.292844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.292928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.292968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.293072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.293101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 
00:34:09.992 [2024-07-23 10:54:58.293189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.293226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.293318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.293346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.293431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.293467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.293560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.293591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.293696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.293726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 
00:34:09.992 [2024-07-23 10:54:58.293827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.293856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.293939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.293964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.294053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.294083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.294198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.294228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.294324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.294359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 
00:34:09.992 [2024-07-23 10:54:58.294468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.294508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.294590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.294618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.294702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.294728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.294810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.294837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 00:34:09.992 [2024-07-23 10:54:58.294924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.992 [2024-07-23 10:54:58.294951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.992 qpair failed and we were unable to recover it. 
00:34:09.992 [2024-07-23 10:54:58.295059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.992 [2024-07-23 10:54:58.295088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.992 qpair failed and we were unable to recover it.
00:34:09.992 [2024-07-23 10:54:58.295178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.992 [2024-07-23 10:54:58.295210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.992 qpair failed and we were unable to recover it.
00:34:09.992 [2024-07-23 10:54:58.295294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.992 [2024-07-23 10:54:58.295320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.992 qpair failed and we were unable to recover it.
00:34:09.992 [2024-07-23 10:54:58.295415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.992 [2024-07-23 10:54:58.295451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.992 qpair failed and we were unable to recover it.
00:34:09.992 [2024-07-23 10:54:58.295545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.992 [2024-07-23 10:54:58.295571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.992 qpair failed and we were unable to recover it.
00:34:09.992 [2024-07-23 10:54:58.295664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.992 [2024-07-23 10:54:58.295700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.992 qpair failed and we were unable to recover it.
00:34:09.992 [2024-07-23 10:54:58.295789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.992 [2024-07-23 10:54:58.295819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.992 qpair failed and we were unable to recover it.
00:34:09.992 [2024-07-23 10:54:58.295904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.992 [2024-07-23 10:54:58.295940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.992 qpair failed and we were unable to recover it.
00:34:09.992 [2024-07-23 10:54:58.296036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.992 [2024-07-23 10:54:58.296063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.992 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.296161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.296196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.296290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.296320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.296407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.296441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.296533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.296559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.296642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.296669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.296764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.296790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.296878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.296916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.297023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.297051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.297159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.297187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.297271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.297298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.297385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.297413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.297525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.297552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.297653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.297688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.297795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.297824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.297917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.297948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.298035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.298073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.298170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.298195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.298280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.298312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.298406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.298433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.298528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.298566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.298663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.298692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.298783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.298816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.298907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.298934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.299021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.299049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.299156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.299186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.299274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.299310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.299395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.299420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.299499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.299535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.299621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.299646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.299734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.299760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.299846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.299874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.299963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.299989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.300078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.300112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.300211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.300240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.300323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.300353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.300437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.300462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.300556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.300584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.300690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.300719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.300817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.993 [2024-07-23 10:54:58.300845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.993 qpair failed and we were unable to recover it.
00:34:09.993 [2024-07-23 10:54:58.300927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.300956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.301052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.301089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.301177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.301208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.301304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.301331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.301413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.301448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.301548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.301575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.301659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.301687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.301780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.301806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.301892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.301923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.302017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.302046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.302133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.302162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.302243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.302281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.302378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.302409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.302507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.302541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.302624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.302650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.302735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.302762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.302855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.302881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.302967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.302994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.303078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.303112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.303199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.303225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.303324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.303354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.303437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.303465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.303565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.303594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.303676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.303713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.303802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.303830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.303915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.303946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.304045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.304073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.304157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.304184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.304278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.304310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.304410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.304445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.304557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.304596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.304695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.304721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.304809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.304843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.304943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.304971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.305078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.305107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.305197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.305227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.305322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.305351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.305436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.994 [2024-07-23 10:54:58.305470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.994 qpair failed and we were unable to recover it.
00:34:09.994 [2024-07-23 10:54:58.305580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.305610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.305697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.305725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.305812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.305844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.305940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.305968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.306060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.306096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.306190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.306218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.306322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.306355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.306443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.306492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.306584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.306614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.306702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.306738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.306833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.306862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.306945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.306980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.307066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.307093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.307193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.307229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.307318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.307345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.307443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.307472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.307586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.307623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.307706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.307732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.307815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.307854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.307964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.307991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.308089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.308118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.308220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.308255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.308346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.308376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.308456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.308499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.308593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.308620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.308706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.308733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.308829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.308859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.308948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.308975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.309069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.995 [2024-07-23 10:54:58.309096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:09.995 qpair failed and we were unable to recover it.
00:34:09.995 [2024-07-23 10:54:58.309193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.995 [2024-07-23 10:54:58.309221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.995 qpair failed and we were unable to recover it. 00:34:09.995 [2024-07-23 10:54:58.309305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.995 [2024-07-23 10:54:58.309341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.995 qpair failed and we were unable to recover it. 00:34:09.995 [2024-07-23 10:54:58.309439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.995 [2024-07-23 10:54:58.309467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.995 qpair failed and we were unable to recover it. 00:34:09.995 [2024-07-23 10:54:58.309565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.995 [2024-07-23 10:54:58.309594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.995 qpair failed and we were unable to recover it. 00:34:09.995 [2024-07-23 10:54:58.309676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.995 [2024-07-23 10:54:58.309712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.995 qpair failed and we were unable to recover it. 
00:34:09.995 [2024-07-23 10:54:58.309792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.995 [2024-07-23 10:54:58.309818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.995 qpair failed and we were unable to recover it. 00:34:09.995 [2024-07-23 10:54:58.309908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.995 [2024-07-23 10:54:58.309935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.995 qpair failed and we were unable to recover it. 00:34:09.995 [2024-07-23 10:54:58.310019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.995 [2024-07-23 10:54:58.310057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.995 qpair failed and we were unable to recover it. 00:34:09.995 [2024-07-23 10:54:58.310152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.995 [2024-07-23 10:54:58.310179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.310267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.310295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 
00:34:09.996 [2024-07-23 10:54:58.310374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.310402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.310506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.310536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.310628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.310657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.310758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.310790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.310878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.310910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 
00:34:09.996 [2024-07-23 10:54:58.310998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.311023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.311112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.311141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.311233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.311261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.311343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.311371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.311477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.311512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 
00:34:09.996 [2024-07-23 10:54:58.311604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.311640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.311727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.311756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.311841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.311877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.311975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.312003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.312101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.312129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 
00:34:09.996 [2024-07-23 10:54:58.312215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.312243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.312338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.312367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.312472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.312505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.312590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.312618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.312701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.312728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 
00:34:09.996 [2024-07-23 10:54:58.312820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.312849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.312933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.312959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.313045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.313071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.313153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.313179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.313259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.313286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 
00:34:09.996 [2024-07-23 10:54:58.313367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.313395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.313477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.313511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.313598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.313624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.313710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.313736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 00:34:09.996 [2024-07-23 10:54:58.313822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.996 [2024-07-23 10:54:58.313849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.996 qpair failed and we were unable to recover it. 
00:34:09.997 [2024-07-23 10:54:58.313942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.313977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.314065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.314094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.314180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.314210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.314303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.314330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.314417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.314444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 
00:34:09.997 [2024-07-23 10:54:58.314560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.314588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.314697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.314728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.314815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.314843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.314931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.314959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.315041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.315068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 
00:34:09.997 [2024-07-23 10:54:58.315148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.315174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.315279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.315307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.315409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.315437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.315526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.315558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.315659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.315686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 
00:34:09.997 [2024-07-23 10:54:58.315778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.315806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.315899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.315935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.316024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.316054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.316164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.316193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.316274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.316300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 
00:34:09.997 [2024-07-23 10:54:58.316391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.316420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.316520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.316547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.316653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.316682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.316769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.316797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.316900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.316930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 
00:34:09.997 [2024-07-23 10:54:58.317010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.317036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.317122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.317159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.317246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.317272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.317358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.317390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.317499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.317527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 
00:34:09.997 [2024-07-23 10:54:58.317620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.317657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.317750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.317775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.317860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.317897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.317999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.318027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.318122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.318156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 
00:34:09.997 [2024-07-23 10:54:58.318260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.318290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.318391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.318418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.318510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.318536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.318618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.318646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 00:34:09.997 [2024-07-23 10:54:58.318727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.997 [2024-07-23 10:54:58.318754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.997 qpair failed and we were unable to recover it. 
00:34:09.997 [2024-07-23 10:54:58.318846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.318880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.318965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.319001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.319094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.319120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.319206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.319239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.319328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.319353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 
00:34:09.998 [2024-07-23 10:54:58.319439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.319473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.319578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.319605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.319707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.319735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.319818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.319855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.319937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.319963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 
00:34:09.998 [2024-07-23 10:54:58.320043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.320070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.320163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.320190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.320282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.320320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.320416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.320447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.320557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.320586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 
00:34:09.998 [2024-07-23 10:54:58.320674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.320711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.320812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.320839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.320945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.320976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.321057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.321083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.321164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.321202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 
00:34:09.998 [2024-07-23 10:54:58.321305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.321335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.321422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.321451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.321547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.321580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.321681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.321708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.321789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.321815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 
00:34:09.998 [2024-07-23 10:54:58.321906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.321936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.322029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.322056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.322158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.322190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.322272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.322308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.322406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.322433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 
00:34:09.998 [2024-07-23 10:54:58.322539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.322568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.322664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.322692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.322784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.322818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.322907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.322933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.323016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.323044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 
00:34:09.998 [2024-07-23 10:54:58.323136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.323163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.323254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.323290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.323381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.323410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.323504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.323542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 00:34:09.998 [2024-07-23 10:54:58.323653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.998 [2024-07-23 10:54:58.323682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.998 qpair failed and we were unable to recover it. 
00:34:09.998 [2024-07-23 10:54:58.323770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.323802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.323884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.323920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.324018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.324048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.324143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.324172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.324264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.324290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 
00:34:09.999 [2024-07-23 10:54:58.324375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.324409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.324504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.324530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.324610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.324636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.324724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.324749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.324836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.324863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 
00:34:09.999 [2024-07-23 10:54:58.324957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.324983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.325070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.325097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.325178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.325207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.325299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.325331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.325431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.325465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 
00:34:09.999 [2024-07-23 10:54:58.325563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.325592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.325676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.325702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.325797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.325823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.325911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.325946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.326039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.326066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 
00:34:09.999 [2024-07-23 10:54:58.326148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.326174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.326281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.326309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.326416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.326446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.326541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.326570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.326663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.326699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 
00:34:09.999 [2024-07-23 10:54:58.326804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.326832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.326941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.326968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.327070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.327101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.327191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.327228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.327314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.327339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 
00:34:09.999 [2024-07-23 10:54:58.327427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.327462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.327557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.327583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.327675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.327708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.327811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.327838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.327938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.327966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 
00:34:09.999 [2024-07-23 10:54:58.328055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.328086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.328171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.328199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.328290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.328320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.328414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.328442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:09.999 [2024-07-23 10:54:58.328532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.328569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 
00:34:09.999 [2024-07-23 10:54:58.328663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.999 [2024-07-23 10:54:58.328691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:09.999 qpair failed and we were unable to recover it. 00:34:10.000 [2024-07-23 10:54:58.328803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.000 [2024-07-23 10:54:58.328832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.000 qpair failed and we were unable to recover it. 00:34:10.000 [2024-07-23 10:54:58.328943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.000 [2024-07-23 10:54:58.328983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.000 qpair failed and we were unable to recover it. 00:34:10.000 [2024-07-23 10:54:58.329089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.000 [2024-07-23 10:54:58.329117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.000 qpair failed and we were unable to recover it. 00:34:10.000 [2024-07-23 10:54:58.329200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.000 [2024-07-23 10:54:58.329227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.000 qpair failed and we were unable to recover it. 
00:34:10.000 [2024-07-23 10:54:58.329311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.000 [2024-07-23 10:54:58.329339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.000 qpair failed and we were unable to recover it. 00:34:10.000 [2024-07-23 10:54:58.329438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.000 [2024-07-23 10:54:58.329466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.000 qpair failed and we were unable to recover it. 00:34:10.000 [2024-07-23 10:54:58.329562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.000 [2024-07-23 10:54:58.329598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.000 qpair failed and we were unable to recover it. 00:34:10.000 [2024-07-23 10:54:58.329691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.000 [2024-07-23 10:54:58.329718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.000 qpair failed and we were unable to recover it. 00:34:10.000 [2024-07-23 10:54:58.329796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.000 [2024-07-23 10:54:58.329823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.000 qpair failed and we were unable to recover it. 
00:34:10.000 [2024-07-23 10:54:58.329918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.329945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.330037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.330072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.330175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.330204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.330293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.330320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.330410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.330446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.330551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.330580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.330675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.330707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.330802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.330829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.330914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.330941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.331027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.331055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.331141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.331168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.331302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.331329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.331440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.331473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.331566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.331594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.331688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.331714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.331806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.331843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.331944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.331972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.332071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.332100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.332193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.332221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.332305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.332343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.332433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.332465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.332569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.332598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.332694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.332721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.332821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.332855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.332942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.332970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.333059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.000 [2024-07-23 10:54:58.333088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.000 qpair failed and we were unable to recover it.
00:34:10.000 [2024-07-23 10:54:58.333173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.333208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.333303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.333332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.333418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.333455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.333556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.333586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.333677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.333704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.333794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.333822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.333905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.333940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.334030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.334056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.334147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.334177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.334274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.334304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.334386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.334422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.334519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.334545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.334650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.334680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.334769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.334798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.334882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.334919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.335007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.335038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.335139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.335175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.335266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.335293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.335396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.335435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.335527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.335554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.335644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.335677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.335766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.335792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.335877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.335914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.336011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.336040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.336132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.001 [2024-07-23 10:54:58.336165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.001 qpair failed and we were unable to recover it.
00:34:10.001 [2024-07-23 10:54:58.336259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.336287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.336379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.336410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.336505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.336534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.336620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.336657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.336753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.336781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.336863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.336899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.336995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.337023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.337114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.337151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.337247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.337276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.337361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.337388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.337473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.337506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.337597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.337626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.337722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.337749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.337846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.337873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.337958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.337985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.338091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.338121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.338204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.338240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.338346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.338374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.338476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.338508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.338602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.338634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.338757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.338794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.338929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.338985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.339116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.339174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.339255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.339281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.339368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.339405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.339510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.339538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.339621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.339654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.339777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.339807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.339945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.340003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.340195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.340249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.340367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.340402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.340501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.340532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.002 [2024-07-23 10:54:58.340613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.002 [2024-07-23 10:54:58.340640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.002 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.340731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.340764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.340897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.340949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.341033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.341063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.341144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.341171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.341260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.341287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.341412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.341454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.341588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.341635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.341763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.341829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.341996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.342062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.342175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.342234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.342317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.342344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.342427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.342454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.342603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.342654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.342779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.342842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.342959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.343013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.343147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.343199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.343305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.343360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.343468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.343533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.343626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.343653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.343737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.343763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.343900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.343966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.344102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.344168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.344313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.344370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.344570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.344632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.344827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.344894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.345038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.345105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.345341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.345402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.003 qpair failed and we were unable to recover it.
00:34:10.003 [2024-07-23 10:54:58.345574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.003 [2024-07-23 10:54:58.345642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.004 qpair failed and we were unable to recover it.
00:34:10.004 [2024-07-23 10:54:58.345737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.004 [2024-07-23 10:54:58.345763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.004 qpair failed and we were unable to recover it.
00:34:10.004 [2024-07-23 10:54:58.345882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.004 [2024-07-23 10:54:58.345929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.004 qpair failed and we were unable to recover it.
00:34:10.004 [2024-07-23 10:54:58.346020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.004 [2024-07-23 10:54:58.346047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.004 qpair failed and we were unable to recover it.
00:34:10.004 [2024-07-23 10:54:58.346152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.346212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.346329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.346386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.346511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.346561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.346656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.346682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.346780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.346843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 
00:34:10.004 [2024-07-23 10:54:58.346962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.347019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.347133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.347195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.347311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.347372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.347512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.347569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.347721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.347782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 
00:34:10.004 [2024-07-23 10:54:58.347978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.348039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.348176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.348239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.348381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.348410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.348504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.348541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.348670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.348721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 
00:34:10.004 [2024-07-23 10:54:58.348851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.348895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.348977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.349010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.349099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.349126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.349249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.349301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.349428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.349484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 
00:34:10.004 [2024-07-23 10:54:58.349573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.349601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.349715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.349776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.349899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.349950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.350036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.350064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.350178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.350239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 
00:34:10.004 [2024-07-23 10:54:58.350325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.350352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.350469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.350534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.350615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.350643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.350729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.350757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 00:34:10.004 [2024-07-23 10:54:58.350839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.350866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.004 qpair failed and we were unable to recover it. 
00:34:10.004 [2024-07-23 10:54:58.350989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.004 [2024-07-23 10:54:58.351036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.351154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.351209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.351346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.351411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.351562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.351612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.351769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.351833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 
00:34:10.005 [2024-07-23 10:54:58.351974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.352047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.352218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.352277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.352429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.352457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.352555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.352592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.352683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.352710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 
00:34:10.005 [2024-07-23 10:54:58.352858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.352916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.353020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.353083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.353167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.353193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.353292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.353318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.353407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.353435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 
00:34:10.005 [2024-07-23 10:54:58.353554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.353608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.353709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.353764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.353844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.353870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.353956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.353991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.354086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.354111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 
00:34:10.005 [2024-07-23 10:54:58.354199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.354228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.354314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.354344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.354430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.354467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.354578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.354607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.354688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.354724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 
00:34:10.005 [2024-07-23 10:54:58.354819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.354850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.005 qpair failed and we were unable to recover it. 00:34:10.005 [2024-07-23 10:54:58.354933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.005 [2024-07-23 10:54:58.354961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.355062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.355090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.355175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.355212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.355302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.355329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 
00:34:10.006 [2024-07-23 10:54:58.355410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.355438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.355542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.355572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.355664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.355702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.355813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.355857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.355950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.355976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 
00:34:10.006 [2024-07-23 10:54:58.356061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.356123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.356291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.356351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.356497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.356565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.356703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.356773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.356972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.357036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 
00:34:10.006 [2024-07-23 10:54:58.357194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.357260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.357389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.357440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.357581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.357639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.357731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.357757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.357837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.357864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 
00:34:10.006 [2024-07-23 10:54:58.357957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.357983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.358068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.358093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.358178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.358203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.358282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.358308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.358398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.358437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 
00:34:10.006 [2024-07-23 10:54:58.358541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.358571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.358679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.358710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.358799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.358827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.358923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.006 [2024-07-23 10:54:58.358955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.006 qpair failed and we were unable to recover it. 00:34:10.006 [2024-07-23 10:54:58.359066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.359095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 
00:34:10.007 [2024-07-23 10:54:58.359181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.359210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.359292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.359319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.359397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.359424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.359513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.359540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.359688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.359715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 
00:34:10.007 [2024-07-23 10:54:58.359805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.359843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.359960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.360026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.360187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.360252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.360420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.360499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.360658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.360718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 
00:34:10.007 [2024-07-23 10:54:58.360865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.360934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.361099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.361158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.361344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.361400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.361495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.361524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.361608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.361633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 
00:34:10.007 [2024-07-23 10:54:58.361718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.361745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.361834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.361860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.361948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.361981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.362073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.362108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.362214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.362241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 
00:34:10.007 [2024-07-23 10:54:58.362329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.362359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.362516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.362545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.362662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.362710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.362815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.362879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.362965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.362991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 
00:34:10.007 [2024-07-23 10:54:58.363117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.363168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.363253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.363279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.363366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.363393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.363550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.363604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.363744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.363779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 
00:34:10.007 [2024-07-23 10:54:58.363937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.364010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.364173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.364231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.364380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.364448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.364605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.364675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.007 [2024-07-23 10:54:58.364835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.364894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 
00:34:10.007 [2024-07-23 10:54:58.365052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.007 [2024-07-23 10:54:58.365114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.007 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.365251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.365323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.365465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.365500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.365640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.365703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.365823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.365868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 
00:34:10.008 [2024-07-23 10:54:58.365997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.366051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.366177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.366232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.366312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.366337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.366453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.366526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.366609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.366635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 
00:34:10.008 [2024-07-23 10:54:58.366721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.366751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.366897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.366935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.367083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.367114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.367247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.367303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.367421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.367476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 
00:34:10.008 [2024-07-23 10:54:58.367626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.367679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.367806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.367865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.367967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.368040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.368140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.368213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.368350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.368412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 
00:34:10.008 [2024-07-23 10:54:58.368499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.368526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.368612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.368639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.368725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.368752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.368850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.368901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.368999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.369029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 
00:34:10.008 [2024-07-23 10:54:58.369114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.369180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.369369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.369432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.369607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.369636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.369727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.369755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.369876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.369924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 
00:34:10.008 [2024-07-23 10:54:58.370009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.370037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.370118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.370146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.370264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.370326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.370506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.370567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.370718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.370787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 
00:34:10.008 [2024-07-23 10:54:58.370952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.371018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.371187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.371250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.371400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.008 [2024-07-23 10:54:58.371514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.008 qpair failed and we were unable to recover it. 00:34:10.008 [2024-07-23 10:54:58.371680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.371707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.371848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.371906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 
00:34:10.009 [2024-07-23 10:54:58.372059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.372088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.372263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.372290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.372502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.372530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.372688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.372763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.372951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.373010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 
00:34:10.009 [2024-07-23 10:54:58.373159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.373234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.373398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.373461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.373642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.373702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.373896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.373953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.374042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.374067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 
00:34:10.009 [2024-07-23 10:54:58.374183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.374242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.374377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.374433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.374543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.374604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.374722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.374773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.374860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.374885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 
00:34:10.009 [2024-07-23 10:54:58.374984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.375046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.375137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.375162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.375299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.375329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.375490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.375557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 00:34:10.009 [2024-07-23 10:54:58.375726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.009 [2024-07-23 10:54:58.375786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.009 qpair failed and we were unable to recover it. 
00:34:10.009 [2024-07-23 10:54:58.375928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.376004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.376143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.376208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.376375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.376439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.376541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.376569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.376707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.376786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.376951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.377013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.377183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.377239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.377357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.377409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.377503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.377552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.377670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.377724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.377809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.377835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.377949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.378010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.378129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.378183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.378314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.378360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.378444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.378469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.378626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.009 [2024-07-23 10:54:58.378672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.009 qpair failed and we were unable to recover it.
00:34:10.009 [2024-07-23 10:54:58.378836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.378894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.379059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.379130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.379367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.379393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.379552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.379583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.379675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.379700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.379816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.379868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.379970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.380032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.380157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.380202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.380302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.380365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.380478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.380549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.380684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.380742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.380860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.380917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.381086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.381150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.381315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.381377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.381562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.381616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.381756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.381798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.381920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.381976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.382084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.382147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.382275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.382331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.382469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.382540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.382662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.382718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.382801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.382826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.382910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.382936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.383065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.383122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.383218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.383242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.383360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.383414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.383499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.383526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.383611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.383637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.383755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.383816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.383947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.383991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.384079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.384104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.384196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.384240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.384349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.384379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.384469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.384502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.384663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.384692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.384779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.384805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.384891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.384918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.385014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.010 [2024-07-23 10:54:58.385042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.010 qpair failed and we were unable to recover it.
00:34:10.010 [2024-07-23 10:54:58.385122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.385149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.385259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.385317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.385400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.385425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.385556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.385623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.385732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.385779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.385965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.386028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.386174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.386241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.386421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.386503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.386655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.386709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.386849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.386917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.387055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.387126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.387275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.387327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.387474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.387550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.387707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.387769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.387916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.387987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.388149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.388218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.388341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.388400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.388493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.388523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.388647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.388704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.388788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.388812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.388897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.388924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.389002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.389029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.389113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.389140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.389240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.389307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.389395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.389420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.389502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.389528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.389614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.389640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.389728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.389755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.389853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.389878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.389960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.389985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.390065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.390090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.011 [2024-07-23 10:54:58.390206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.011 [2024-07-23 10:54:58.390269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.011 qpair failed and we were unable to recover it.
00:34:10.012 [2024-07-23 10:54:58.390366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.012 [2024-07-23 10:54:58.390400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.012 qpair failed and we were unable to recover it.
00:34:10.012 [2024-07-23 10:54:58.390499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.012 [2024-07-23 10:54:58.390552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.012 qpair failed and we were unable to recover it.
00:34:10.012 [2024-07-23 10:54:58.390781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.012 [2024-07-23 10:54:58.390833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.012 qpair failed and we were unable to recover it.
00:34:10.012 [2024-07-23 10:54:58.390919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.012 [2024-07-23 10:54:58.390946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.012 qpair failed and we were unable to recover it.
00:34:10.012 [2024-07-23 10:54:58.391028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.012 [2024-07-23 10:54:58.391056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.012 qpair failed and we were unable to recover it.
00:34:10.012 [2024-07-23 10:54:58.391187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.012 [2024-07-23 10:54:58.391229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.012 qpair failed and we were unable to recover it.
00:34:10.012 [2024-07-23 10:54:58.391329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.012 [2024-07-23 10:54:58.391396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.012 qpair failed and we were unable to recover it.
00:34:10.012 [2024-07-23 10:54:58.391510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.012 [2024-07-23 10:54:58.391562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.012 qpair failed and we were unable to recover it.
00:34:10.012 [2024-07-23 10:54:58.391693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.012 [2024-07-23 10:54:58.391742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.012 qpair failed and we were unable to recover it.
00:34:10.012 [2024-07-23 10:54:58.391870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.391924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.392057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.392111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.392230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.392288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.392408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.392466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.392597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.392645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 
00:34:10.012 [2024-07-23 10:54:58.392742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.392770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.392872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.392923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.393011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.393035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.393119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.393144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.393265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.393318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 
00:34:10.012 [2024-07-23 10:54:58.393407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.393431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.393524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.393553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.393634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.393660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.393766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.393823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.393915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.393939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 
00:34:10.012 [2024-07-23 10:54:58.394067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.394119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.394204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.394230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.394319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.394346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.394434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.394459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.394581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.394633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 
00:34:10.012 [2024-07-23 10:54:58.394760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.394814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.394936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.394995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.395104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.395166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.395295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.395337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.395417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.395443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 
00:34:10.012 [2024-07-23 10:54:58.395580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.395623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.395768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.395794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.395878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.395905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.395990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.012 [2024-07-23 10:54:58.396017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.012 qpair failed and we were unable to recover it. 00:34:10.012 [2024-07-23 10:54:58.396170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.396236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 
00:34:10.013 [2024-07-23 10:54:58.396380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.396469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.396623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.396677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.396819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.396874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.396961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.396987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.397095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.397150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 
00:34:10.013 [2024-07-23 10:54:58.397268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.397321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.397502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.397551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.397711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.397774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.397920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.397980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.398130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.398203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 
00:34:10.013 [2024-07-23 10:54:58.398368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.398430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.398607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.398670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.398856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.398918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.399140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.399168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.399342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.399408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 
00:34:10.013 [2024-07-23 10:54:58.399556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.399635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.399779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.399847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.400026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.400054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.400196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.400272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.400430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.400507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 
00:34:10.013 [2024-07-23 10:54:58.400684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.400743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.400902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.400964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.401134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.401198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.401379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.401425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.401624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.401693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 
00:34:10.013 [2024-07-23 10:54:58.401781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.401806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.401932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.401984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.402107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.402158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.402295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.402354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.402505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.402542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 
00:34:10.013 [2024-07-23 10:54:58.402666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.402723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.402824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.402850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.402968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.403036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.403190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.403255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.403470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.403542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 
00:34:10.013 [2024-07-23 10:54:58.403698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.403758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.013 [2024-07-23 10:54:58.403955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.013 [2024-07-23 10:54:58.404015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.013 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.404175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.404237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.404462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.404494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.404637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.404691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 
00:34:10.014 [2024-07-23 10:54:58.404842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.404914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.405076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.405137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.405358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.405417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.405650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.405711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.405858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.405916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 
00:34:10.014 [2024-07-23 10:54:58.406072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.406133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.406370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.406429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.406607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.406676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.406863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.406918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.407014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.407039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 
00:34:10.014 [2024-07-23 10:54:58.407116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.407141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.407255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.407317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.407400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.407426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.407562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.407647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.407784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.407847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 
00:34:10.014 [2024-07-23 10:54:58.407977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.408037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.408125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.408151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.408258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.408321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.408411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.408436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.408562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.408611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 
00:34:10.014 [2024-07-23 10:54:58.408781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.408835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.408924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.408950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.409031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.409057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.409143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.409172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 00:34:10.014 [2024-07-23 10:54:58.409261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.014 [2024-07-23 10:54:58.409289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.014 qpair failed and we were unable to recover it. 
00:34:10.014 [2024-07-23 10:54:58.409438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.014 [2024-07-23 10:54:58.409475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.014 qpair failed and we were unable to recover it.
00:34:10.014 [2024-07-23 10:54:58.409615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.014 [2024-07-23 10:54:58.409672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.014 qpair failed and we were unable to recover it.
00:34:10.014 [2024-07-23 10:54:58.409813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.014 [2024-07-23 10:54:58.409864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.014 qpair failed and we were unable to recover it.
00:34:10.014 [2024-07-23 10:54:58.410002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.014 [2024-07-23 10:54:58.410058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.014 qpair failed and we were unable to recover it.
00:34:10.014 [2024-07-23 10:54:58.410178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.014 [2024-07-23 10:54:58.410240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.014 qpair failed and we were unable to recover it.
00:34:10.014 [2024-07-23 10:54:58.410348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.014 [2024-07-23 10:54:58.410412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.014 qpair failed and we were unable to recover it.
00:34:10.014 [2024-07-23 10:54:58.410507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.014 [2024-07-23 10:54:58.410532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.014 qpair failed and we were unable to recover it.
00:34:10.014 [2024-07-23 10:54:58.410613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.014 [2024-07-23 10:54:58.410637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.014 qpair failed and we were unable to recover it.
00:34:10.014 [2024-07-23 10:54:58.410749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.014 [2024-07-23 10:54:58.410780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.014 qpair failed and we were unable to recover it.
00:34:10.014 [2024-07-23 10:54:58.410931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.014 [2024-07-23 10:54:58.410964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.014 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.411124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.411162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.411251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.411278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.411369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.411398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.411511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.411539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.411631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.411659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.411752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.411779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.411865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.411891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.411970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.411996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.412079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.412105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.412187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.412213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.412293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.412320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.412408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.412435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.412545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.412573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.412656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.412682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.412761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.412787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.412903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.412958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.413086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.413139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.413277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.413329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.413419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.413449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.413576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.413649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.413818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.413878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.414030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.414092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.414327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.414355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.414498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.414568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.414704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.414772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.414913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.414989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.415128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.415195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.415394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.415459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.415587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.415643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.415729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.415754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.415836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.415863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.015 [2024-07-23 10:54:58.416013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.015 [2024-07-23 10:54:58.416064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.015 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.416181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.416238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.416331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.416357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.416437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.416463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.416581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.416616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.416743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.416812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.416909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.416934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.417023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.417049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.417136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.417161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.417267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.417328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.417457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.417513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.417646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.417694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.417805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.417866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.417953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.417982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.418139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.418206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.418366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.418438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.418620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.418677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.418780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.418804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.418886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.418910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.419012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.419072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.419203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.419259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.419397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.419447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.419540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.419567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.419693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.419738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.419865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.419920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.420062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.420116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.420222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.420278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.420412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.420475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.420572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.420599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.420733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.420789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.420974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.421030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.421184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.421250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.421411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.421472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.421660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.421707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.421920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.421948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.422139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.422166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.422405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.422433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.422600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.422662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.016 [2024-07-23 10:54:58.422840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.016 [2024-07-23 10:54:58.422868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.016 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.423043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.423105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.423246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.423323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.423510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.423571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.423717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.423809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.423970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.424029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.424167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.424231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.424369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.424439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.424715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.424775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.425022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.425081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.425223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.425284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.425516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.425563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.425745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.425812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.425965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.426026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.426173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.426242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.426501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.017 [2024-07-23 10:54:58.426552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.017 qpair failed and we were unable to recover it.
00:34:10.017 [2024-07-23 10:54:58.426632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.017 [2024-07-23 10:54:58.426700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.017 qpair failed and we were unable to recover it. 00:34:10.017 [2024-07-23 10:54:58.426908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.426968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.427146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.427206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.427349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.427428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.427585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.427623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 
00:34:10.302 [2024-07-23 10:54:58.427763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.427816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.427910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.427938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.428049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.428109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.428220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.428285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.428376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.428403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 
00:34:10.302 [2024-07-23 10:54:58.428501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.428529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.428635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.428662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.428770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.428797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.428896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.428923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.429010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.429038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 
00:34:10.302 [2024-07-23 10:54:58.429132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.429160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.429283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.429344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.429471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.429532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.429670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.429730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.429871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.429924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 
00:34:10.302 [2024-07-23 10:54:58.430079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.430124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.430231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.430256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.430345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.430372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.430457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.430499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.430613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.430640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 
00:34:10.302 [2024-07-23 10:54:58.430770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.430834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.430952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.431014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.431107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.431132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.431248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.431317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.431464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.431500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 
00:34:10.302 [2024-07-23 10:54:58.431656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.431689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.431814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.431844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.302 qpair failed and we were unable to recover it. 00:34:10.302 [2024-07-23 10:54:58.431945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.302 [2024-07-23 10:54:58.431994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.432103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.432175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.432349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.432409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 
00:34:10.303 [2024-07-23 10:54:58.432568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.432629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.432764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.432829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.432971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.433041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.433202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.433262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.433421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.433501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 
00:34:10.303 [2024-07-23 10:54:58.433681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.433712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.433884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.433943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.434108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.434140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.434324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.434383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.434592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.434657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 
00:34:10.303 [2024-07-23 10:54:58.434819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.434886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.435040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.435102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.435266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.435326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.435529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.435566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.435708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.435775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 
00:34:10.303 [2024-07-23 10:54:58.435918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.435973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.436196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.436222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.436381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.436432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.436612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.436675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.436899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.436926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 
00:34:10.303 [2024-07-23 10:54:58.437099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.437169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.437309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.437378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.437533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.437592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.437756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.437823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.437963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.438016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 
00:34:10.303 [2024-07-23 10:54:58.438101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.438128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.438237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.438290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.438383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.438415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.438533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.438595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.438698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.438760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 
00:34:10.303 [2024-07-23 10:54:58.438884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.438948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.439080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.439129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.439210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.439236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.439354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.439410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 00:34:10.303 [2024-07-23 10:54:58.439535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.303 [2024-07-23 10:54:58.439596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.303 qpair failed and we were unable to recover it. 
00:34:10.303 [2024-07-23 10:54:58.439716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.439772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.439887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.439943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.440021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.440048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.440191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.440220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.440335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.440399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 
00:34:10.304 [2024-07-23 10:54:58.440508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.440544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.440708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.440737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.440888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.440916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.441025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.441080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.441196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.441251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 
00:34:10.304 [2024-07-23 10:54:58.441371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.441428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.441521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.441549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.441636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.441668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.441771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.441804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.441902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.441929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 
00:34:10.304 [2024-07-23 10:54:58.442036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.442090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.442220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.442280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.442500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.442528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.442664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.442728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.442906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.442961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 
00:34:10.304 [2024-07-23 10:54:58.443100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.443165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.443296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.443359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.443505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.443535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.443616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.443642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.443774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.443828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 
00:34:10.304 [2024-07-23 10:54:58.443948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.444003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.444139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.444192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.444312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.444366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.444515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.444544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.444660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.444718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 
00:34:10.304 [2024-07-23 10:54:58.444812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.444839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.444947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.445010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.445097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.445126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.445213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.445240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.445387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.445416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 
00:34:10.304 [2024-07-23 10:54:58.445508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.445537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.445621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.445648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.445841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.304 [2024-07-23 10:54:58.445868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.304 qpair failed and we were unable to recover it. 00:34:10.304 [2024-07-23 10:54:58.446046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.446119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.446254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.446308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 
00:34:10.305 [2024-07-23 10:54:58.446431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.446503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.446599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.446626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.446727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.446790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.446873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.446899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.447067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.447132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 
00:34:10.305 [2024-07-23 10:54:58.447294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.447356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.447518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.447581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.447745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.447805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.447959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.448029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.448170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.448235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 
00:34:10.305 [2024-07-23 10:54:58.448387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.448441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.448658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.448686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.448852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.448919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.449074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.449131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.449269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.449335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 
00:34:10.305 [2024-07-23 10:54:58.449477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.449515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.449647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.449690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.449773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.449799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.449917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.449971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.450057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.450084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 
00:34:10.305 [2024-07-23 10:54:58.450174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.450201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.450286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.450311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.450403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.450432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.450521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.450547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.450626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.450653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 
00:34:10.305 [2024-07-23 10:54:58.450770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.450836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.450974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.451057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.451184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.451231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.451315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.451340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.451443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.451526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 
00:34:10.305 [2024-07-23 10:54:58.451618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.451646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.451743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.451770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.451878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.451920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.452032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.452088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.305 [2024-07-23 10:54:58.452216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.452259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 
00:34:10.305 [2024-07-23 10:54:58.452388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.305 [2024-07-23 10:54:58.452431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.305 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.452524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.452551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.452669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.452723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.452803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.452828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.452915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.452946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 
00:34:10.306 [2024-07-23 10:54:58.453028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.453055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.453145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.453173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.453301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.453351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.453456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.453537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.453624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.453651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 
00:34:10.306 [2024-07-23 10:54:58.453769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.453816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.453934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.453992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.454098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.454161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.454286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.454369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.454524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.454552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 
00:34:10.306 [2024-07-23 10:54:58.454664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.454717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.454796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.454823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.454930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.454982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.455076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.455104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.455266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.455298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 
00:34:10.306 [2024-07-23 10:54:58.455384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.455411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.455507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.455563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.455734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.455791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.455945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.456000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.456154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.456213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 
00:34:10.306 [2024-07-23 10:54:58.456347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.456376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.456513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.456553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.456660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.456720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.456822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.456885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.456977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.457007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 
00:34:10.306 [2024-07-23 10:54:58.457098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.457134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.457221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.306 [2024-07-23 10:54:58.457259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.306 qpair failed and we were unable to recover it. 00:34:10.306 [2024-07-23 10:54:58.457371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.307 [2024-07-23 10:54:58.457398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.307 qpair failed and we were unable to recover it. 00:34:10.307 [2024-07-23 10:54:58.457488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.307 [2024-07-23 10:54:58.457515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.307 qpair failed and we were unable to recover it. 00:34:10.307 [2024-07-23 10:54:58.458310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.307 [2024-07-23 10:54:58.458353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.307 qpair failed and we were unable to recover it. 
00:34:10.307 [2024-07-23 10:54:58.458451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.307 [2024-07-23 10:54:58.458477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.307 qpair failed and we were unable to recover it. 00:34:10.307 [2024-07-23 10:54:58.458588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.307 [2024-07-23 10:54:58.458646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.307 qpair failed and we were unable to recover it. 00:34:10.307 [2024-07-23 10:54:58.458743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.307 [2024-07-23 10:54:58.458802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.307 qpair failed and we were unable to recover it. 00:34:10.307 [2024-07-23 10:54:58.458911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.307 [2024-07-23 10:54:58.458967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.307 qpair failed and we were unable to recover it. 00:34:10.307 [2024-07-23 10:54:58.459079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.307 [2024-07-23 10:54:58.459132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.307 qpair failed and we were unable to recover it. 
00:34:10.310 [2024-07-23 10:54:58.474321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.474346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.474438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.474462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.474560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.474598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.474705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.474733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.474833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.474860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 
00:34:10.310 [2024-07-23 10:54:58.474949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.474974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.475059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.475084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.475175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.475201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.475313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.475341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.475440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.475468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 
00:34:10.310 [2024-07-23 10:54:58.475569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.475600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.475690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.475717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.475811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.475842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.475935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.475962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.476046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.476079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 
00:34:10.310 [2024-07-23 10:54:58.476172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.476199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.476283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.476323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.476418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.476444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.476556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.476582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.476675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.476700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 
00:34:10.310 [2024-07-23 10:54:58.476791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.476816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.476906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.476933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.477024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.477050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.477137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.477167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.477278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.477311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 
00:34:10.310 [2024-07-23 10:54:58.477398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.477426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.477519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.477556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.477654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.477680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.477768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.477795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.477879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.477903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 
00:34:10.310 [2024-07-23 10:54:58.477992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.478018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.478110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.478134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.478224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.478250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.310 [2024-07-23 10:54:58.478344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.310 [2024-07-23 10:54:58.478369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.310 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.478465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.478498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 
00:34:10.311 [2024-07-23 10:54:58.478579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.478603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.478698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.478723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.478801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.478826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.478907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.478931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.479021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.479046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 
00:34:10.311 [2024-07-23 10:54:58.479138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.479167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.479262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.479290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.479380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.479408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.479505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.479533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.479615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.479640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 
00:34:10.311 [2024-07-23 10:54:58.479732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.479757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.479847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.479873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.479955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.479983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.480079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.480105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.480193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.480219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 
00:34:10.311 [2024-07-23 10:54:58.480299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.480324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.480405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.480430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.480535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.480564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.480674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.480701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.480791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.480816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 
00:34:10.311 [2024-07-23 10:54:58.480899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.480924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.481031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.481088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.481194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.481221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.481304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.481329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.481418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.481443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 
00:34:10.311 [2024-07-23 10:54:58.481568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.481597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.481686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.481711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.484552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.484588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.484682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.484708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.484790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.484818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 
00:34:10.311 [2024-07-23 10:54:58.484899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.311 [2024-07-23 10:54:58.484923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.311 qpair failed and we were unable to recover it. 00:34:10.311 [2024-07-23 10:54:58.485015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.485041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.485128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.485154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.485232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.485257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.485352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.485377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 
00:34:10.312 [2024-07-23 10:54:58.485464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.485506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.485599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.485626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.485714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.485740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.485834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.485860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.485956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.485983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 
00:34:10.312 [2024-07-23 10:54:58.486073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.486099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.486180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.486205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.486299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.486324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.486416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.486443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.486557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.486585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 
00:34:10.312 [2024-07-23 10:54:58.486678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.486703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.486791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.486815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.486905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.486930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.487022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.487055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.487163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.487193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 
00:34:10.312 [2024-07-23 10:54:58.487287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.487313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.487409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.487441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.487544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.487571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.487665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.487695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.487788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.487816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 
00:34:10.312 [2024-07-23 10:54:58.487907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.487940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.488030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.488057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.488138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.488163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.488255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.488281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.488366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.488391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 
00:34:10.312 [2024-07-23 10:54:58.488475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.488506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.488591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.488618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.488712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.488738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.488830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.488856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.488941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.488966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 
00:34:10.312 [2024-07-23 10:54:58.489057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.489081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.489163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.489187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.489279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.489305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.489393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.312 [2024-07-23 10:54:58.489419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.312 qpair failed and we were unable to recover it. 00:34:10.312 [2024-07-23 10:54:58.489501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.489529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 
00:34:10.313 [2024-07-23 10:54:58.489625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.489651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.489730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.489756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.489834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.489860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.489948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.489973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.490061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.490087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 
00:34:10.313 [2024-07-23 10:54:58.490168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.490197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.490284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.490309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.490395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.490420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.490519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.490545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.490630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.490656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 
00:34:10.313 [2024-07-23 10:54:58.490752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.490778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.490860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.490887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.490976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.491002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.491102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.491129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.491216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.491243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 
00:34:10.313 [2024-07-23 10:54:58.491341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.491370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.491472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.491504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.491604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.491630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.491717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.491742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.491832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.491858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 
00:34:10.313 [2024-07-23 10:54:58.491951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.491977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.492062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.492094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.492199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.492225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.492304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.492330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.492406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.492432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 
00:34:10.313 [2024-07-23 10:54:58.492533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.492559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.492656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.492683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.492770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.492797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.492892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.492916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.493005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.493031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 
00:34:10.313 [2024-07-23 10:54:58.493120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.493147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.493228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.493255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.493348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.493376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.493465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.493497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.493591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.493615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 
00:34:10.313 [2024-07-23 10:54:58.493701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.493727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.493808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.313 [2024-07-23 10:54:58.493833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.313 qpair failed and we were unable to recover it. 00:34:10.313 [2024-07-23 10:54:58.493920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.493946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.494034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.494061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.494150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.494175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 
00:34:10.314 [2024-07-23 10:54:58.494258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.494283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.494374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.494399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.494478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.494525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.494621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.494646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.494735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.494760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 
00:34:10.314 [2024-07-23 10:54:58.494837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.494861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.494963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.494989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.495065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.495089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.495181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.495206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.495292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.495317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 
00:34:10.314 [2024-07-23 10:54:58.495395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.495421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.495509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.495535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.495619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.495645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.495732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.495758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.495849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.495873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 
00:34:10.314 [2024-07-23 10:54:58.495966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.495990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.496076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.496100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.496176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.496200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.496281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.496306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.496389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.496417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 
00:34:10.314 [2024-07-23 10:54:58.496497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.496522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.496604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.496629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.496721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.496746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.496824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.496848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 00:34:10.314 [2024-07-23 10:54:58.496948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.496973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 
00:34:10.314 [2024-07-23 10:54:58.497059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.314 [2024-07-23 10:54:58.497085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.314 qpair failed and we were unable to recover it. 
00:34:10.315 [2024-07-23 10:54:58.498329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.315 [2024-07-23 10:54:58.498359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.315 qpair failed and we were unable to recover it. 
[Repeats elided: the same connect()-failed / qpair-failed message pair recurs continuously from 10:54:58.497 through 10:54:58.510, alternating between tqpair=0x1f80990 and tqpair=0x7fb6f0000b90, always with errno = 111, addr=10.0.0.2, port=4420.]
00:34:10.317 [2024-07-23 10:54:58.510365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.317 [2024-07-23 10:54:58.510390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.317 qpair failed and we were unable to recover it. 00:34:10.317 [2024-07-23 10:54:58.510505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.317 [2024-07-23 10:54:58.510532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.317 qpair failed and we were unable to recover it. 00:34:10.317 [2024-07-23 10:54:58.510614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.317 [2024-07-23 10:54:58.510638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.317 qpair failed and we were unable to recover it. 00:34:10.317 [2024-07-23 10:54:58.510737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.317 [2024-07-23 10:54:58.510770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.317 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.510855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.510880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 
00:34:10.318 [2024-07-23 10:54:58.510962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.510987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.511066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.511090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.511183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.511209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.511287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.511311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.511395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.511420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 
00:34:10.318 [2024-07-23 10:54:58.511508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.511534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.511621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.511645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.511735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.511760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.511838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.511863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.511948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.511981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 
00:34:10.318 [2024-07-23 10:54:58.512071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.512097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.512184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.512208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.512301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.512328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.512412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.512436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.512526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.512550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 
00:34:10.318 [2024-07-23 10:54:58.512634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.512659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.512735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.512760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.512843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.512868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.512943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.512968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.513065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.513090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 
00:34:10.318 [2024-07-23 10:54:58.513184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.513209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.513292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.513316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.513408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.513434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.513527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.513552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.513638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.513663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 
00:34:10.318 [2024-07-23 10:54:58.513754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.513780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.513864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.513891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.513994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.514029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.514127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.514152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.514235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.514259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 
00:34:10.318 [2024-07-23 10:54:58.514339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.514363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.514448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.514473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.514575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.514600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.514689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.514719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.514819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.514845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 
00:34:10.318 [2024-07-23 10:54:58.514933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.514963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.318 [2024-07-23 10:54:58.515049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.318 [2024-07-23 10:54:58.515073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.318 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.515165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.515189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.515278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.515302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.515390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.515415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 
00:34:10.319 [2024-07-23 10:54:58.515524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.515550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.515638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.515664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.515769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.515819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.515907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.515931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.516013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.516041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 
00:34:10.319 [2024-07-23 10:54:58.516134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.516159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.516252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.516278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.516357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.516381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.516473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.516505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.516606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.516630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 
00:34:10.319 [2024-07-23 10:54:58.516722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.516747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.516837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.516862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.516943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.516968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.517053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.517078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.517159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.517184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 
00:34:10.319 [2024-07-23 10:54:58.517269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.517295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.517376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.517402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.517498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.517527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.517612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.517638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.517727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.517759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 
00:34:10.319 [2024-07-23 10:54:58.517843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.517868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.517951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.517976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.518072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.518098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.518191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.518222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.518307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.518333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 
00:34:10.319 [2024-07-23 10:54:58.518422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.518450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.518572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.518598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.518686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.518711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.518812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.518838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 00:34:10.319 [2024-07-23 10:54:58.518921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.319 [2024-07-23 10:54:58.518947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.319 qpair failed and we were unable to recover it. 
00:34:10.319 [2024-07-23 10:54:58.519045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.319 [2024-07-23 10:54:58.519072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.319 qpair failed and we were unable to recover it.
00:34:10.319 [2024-07-23 10:54:58.519160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.319 [2024-07-23 10:54:58.519187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.319 qpair failed and we were unable to recover it.
00:34:10.319 [2024-07-23 10:54:58.519276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.319 [2024-07-23 10:54:58.519302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.319 qpair failed and we were unable to recover it.
00:34:10.319 [2024-07-23 10:54:58.519385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.319 [2024-07-23 10:54:58.519410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.319 qpair failed and we were unable to recover it.
00:34:10.319 [2024-07-23 10:54:58.519508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.319 [2024-07-23 10:54:58.519533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.319 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.519613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.519637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.519722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.519747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.519849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.519873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.519962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.519989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.520077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.520102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.520186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.520211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.520300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.520324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.520413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.520439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.520538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.520563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.520655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.520679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.520779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.520807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.520890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.520916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.520996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.521022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.521107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.521132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.521223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.521252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.521347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.521382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.521493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.521521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.521612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.521642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.521742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.521769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.521868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.521895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.521984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.522010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.522107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.522134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.522227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.522254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.522348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.522375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.522469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.522515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.522613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.522652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.522742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.522775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.522868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.522895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.522982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.523014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.523104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.523138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.523232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.523261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.523356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.320 [2024-07-23 10:54:58.523383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.320 qpair failed and we were unable to recover it.
00:34:10.320 [2024-07-23 10:54:58.523487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.523517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.523618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.523644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.523739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.523765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.523859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.523885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.523971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.523997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.524086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.524112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.524204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.524233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.524320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.524346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.524432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.524458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.524551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.524577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.524663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.524690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.524776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.524801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.524885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.524912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.525002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.525027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.525119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.525144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.525233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.525261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.525346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.525372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.525457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.525491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.525637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.525681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.525772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.525798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.525883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.525909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.525997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.526023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.526129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.526164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.526267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.526298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.526389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.526415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.526504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.526531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.526618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.526644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.526724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.526750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.526837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.526862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.526947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.526972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.527056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.527081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.527177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.527202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.527307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.527337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.527439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.527465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.527579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.527605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.527693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.527718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.527798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.527824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.527912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.321 [2024-07-23 10:54:58.527938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.321 qpair failed and we were unable to recover it.
00:34:10.321 [2024-07-23 10:54:58.528024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.528049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.528135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.528160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.528251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.528284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.528375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.528400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.528486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.528512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.528600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.528625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.528709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.528735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.528829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.528856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.528947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.528974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.529058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.529084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.529164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.529189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.529277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.529302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.529390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.529421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.529518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.529543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.529627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.529652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.529738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.529764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.529845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.529871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.529961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.529987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.530076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.530102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.530207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.530236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.530332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.530361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.530520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.530549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.530642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.530674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.530791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.530826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.530926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.530952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.531035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.531063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.531161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.531187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.531283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.531316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.531417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.531445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.531549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.531577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.531673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.531700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.531799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.531825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.531905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.531931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.532025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.532053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.532152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.532179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.532262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.532288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.532374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.532401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.532503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.532529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.322 qpair failed and we were unable to recover it.
00:34:10.322 [2024-07-23 10:54:58.532628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.322 [2024-07-23 10:54:58.532654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.323 qpair failed and we were unable to recover it.
00:34:10.323 [2024-07-23 10:54:58.532748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.323 [2024-07-23 10:54:58.532777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.323 qpair failed and we were unable to recover it.
00:34:10.323 [2024-07-23 10:54:58.532863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.532890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.532972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.532997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.533084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.533109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.533206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.533233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.533320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.533349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 
00:34:10.323 [2024-07-23 10:54:58.533443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.533470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.533576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.533603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.533684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.533710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.533789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.533816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.533916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.533942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 
00:34:10.323 [2024-07-23 10:54:58.534023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.534049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.534139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.534166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.534251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.534277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.534365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.534392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.534487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.534521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 
00:34:10.323 [2024-07-23 10:54:58.534603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.534629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.534742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.534769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.534859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.534888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.534980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.535006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.535089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.535115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 
00:34:10.323 [2024-07-23 10:54:58.535202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.535228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.535328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.535354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.535450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.535487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.535584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.535610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.535699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.535727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 
00:34:10.323 [2024-07-23 10:54:58.535822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.535847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.535932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.535963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.536049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.536075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.536167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.536201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.536289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.536316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 
00:34:10.323 [2024-07-23 10:54:58.536413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.536441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.536553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.536580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.536666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.536692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.536788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.536813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-23 10:54:58.536889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.536914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 
00:34:10.323 [2024-07-23 10:54:58.536992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-23 10:54:58.537017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.537102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.537128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.537221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.537247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.537335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.537373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.537470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.537505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 
00:34:10.324 [2024-07-23 10:54:58.537603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.537630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.537715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.537740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.537835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.537861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.537950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.537977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.538062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.538088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 
00:34:10.324 [2024-07-23 10:54:58.538183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.538209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.538308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.538334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.538420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.538449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.538549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.538576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.538659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.538684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 
00:34:10.324 [2024-07-23 10:54:58.538773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.538800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.538895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.538923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.539011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.539037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.539123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.539150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.539234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.539260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 
00:34:10.324 [2024-07-23 10:54:58.539348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.539382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.539473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.539526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.539613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.539639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.539722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.539749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.539834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.539861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 
00:34:10.324 [2024-07-23 10:54:58.539946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.539972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.540069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.540095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.540188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.540216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.540306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.540332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.540414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.540439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 
00:34:10.324 [2024-07-23 10:54:58.540524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.540549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.540632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.540663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.540762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.540788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.540877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.540905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.540994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.541021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 
00:34:10.324 [2024-07-23 10:54:58.541110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.541137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.541221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.541246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.541329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.541353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.324 qpair failed and we were unable to recover it. 00:34:10.324 [2024-07-23 10:54:58.541444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-23 10:54:58.541469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.541562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.541588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 
00:34:10.325 [2024-07-23 10:54:58.541672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.541698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.541783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.541810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.541906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.541931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.542014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.542042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.542130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.542155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 
00:34:10.325 [2024-07-23 10:54:58.542239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.542266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.542349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.542375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.542475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.542510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.542601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.542628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.542710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.542735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 
00:34:10.325 [2024-07-23 10:54:58.542830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.542855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.542951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.542976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.543072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.543098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.543181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.543206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.543295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.543322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 
00:34:10.325 [2024-07-23 10:54:58.543410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.543435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.543539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.543566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.543664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.543690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.543777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.543803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.543897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.543922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 
00:34:10.325 [2024-07-23 10:54:58.544009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.544038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.544126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.544152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.544243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.544268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.544356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.544381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.544464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.544498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 
00:34:10.325 [2024-07-23 10:54:58.544607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.544632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.544714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.544740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.544833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.544866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.544960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.544987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 00:34:10.325 [2024-07-23 10:54:58.545070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-23 10:54:58.545095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.325 qpair failed and we were unable to recover it. 
00:34:10.326 [2024-07-23 10:54:58.545197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.545229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.545326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.545352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.545445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.545491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.545605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.545651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.545741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.545766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 
00:34:10.326 [2024-07-23 10:54:58.545850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.545876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.545960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.545985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.546063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.546088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.546177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.546203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.546285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.546310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 
00:34:10.326 [2024-07-23 10:54:58.546393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.546417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.546510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.546535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.546622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.546648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.546738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.546763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.546871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.546917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 
00:34:10.326 [2024-07-23 10:54:58.546999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.547025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.547119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.547145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.547263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.547306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.547392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.547418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.547506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.547534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 
00:34:10.326 [2024-07-23 10:54:58.547615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.547641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.547728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.547753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.547840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.547865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.547946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.547971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.548053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.548078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 
00:34:10.326 [2024-07-23 10:54:58.548167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.548192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.548281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.548310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.548399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.548430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.548532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.548560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.548658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.548686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 
00:34:10.326 [2024-07-23 10:54:58.548772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.548799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.548882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.548908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.548995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.549022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.549112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.549141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.549232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.549258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 
00:34:10.326 [2024-07-23 10:54:58.549352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.549379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.549463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.549498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.326 qpair failed and we were unable to recover it. 00:34:10.326 [2024-07-23 10:54:58.549587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.326 [2024-07-23 10:54:58.549613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.549693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.549718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.549808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.549833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 
00:34:10.327 [2024-07-23 10:54:58.549920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.549946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.550034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.550060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.550143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.550169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.550265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.550291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.550379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.550404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 
00:34:10.327 [2024-07-23 10:54:58.550491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.550518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.550605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.550630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.550712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.550737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.550827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.550854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.550942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.550968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 
00:34:10.327 [2024-07-23 10:54:58.551055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.551080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.551162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.551191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.551284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.551311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.551396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.551422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.551511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.551539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 
00:34:10.327 [2024-07-23 10:54:58.551629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.551654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.551754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.551786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.551894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.551922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.552016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.552043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.552138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.552166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 
00:34:10.327 [2024-07-23 10:54:58.552262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.552290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.552375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.552400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.552508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.552537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.552637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.552663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.552750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.552777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 
00:34:10.327 [2024-07-23 10:54:58.552865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.552890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.552980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.553006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.553093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.553119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.553206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.553231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.553318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.553349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 
00:34:10.327 [2024-07-23 10:54:58.553438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.553465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.553568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.553594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.553693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.553719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.553810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.553837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 00:34:10.327 [2024-07-23 10:54:58.553917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.553942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.327 qpair failed and we were unable to recover it. 
00:34:10.327 [2024-07-23 10:54:58.554032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.327 [2024-07-23 10:54:58.554057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.554147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.554172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.554268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.554297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.554387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.554412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.554503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.554530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 
00:34:10.328 [2024-07-23 10:54:58.554618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.554643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.554735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.554761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.554852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.554878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.554973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.555000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.555084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.555110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 
00:34:10.328 [2024-07-23 10:54:58.555196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.555223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.555311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.555339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.555424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.555450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.555550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.555577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.555667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.555693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 
00:34:10.328 [2024-07-23 10:54:58.555783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.555808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.555898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.555923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.556005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.556030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.556114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.556139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.556229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.556255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 
00:34:10.328 [2024-07-23 10:54:58.556341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.556370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.556454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.556499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.556603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.556635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.556750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.556781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.556893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.556923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 
00:34:10.328 [2024-07-23 10:54:58.557019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.557044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.557126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.557152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.557241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.557267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.557358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.557385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.557472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.557513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 
00:34:10.328 [2024-07-23 10:54:58.557616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.557649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.557747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.557773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.557859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.557887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.557967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.557993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.558087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.558114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 
00:34:10.328 [2024-07-23 10:54:58.558208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.558236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.558335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.558364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.558458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.328 [2024-07-23 10:54:58.558497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.328 qpair failed and we were unable to recover it. 00:34:10.328 [2024-07-23 10:54:58.558593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.558620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.558706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.558732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 
00:34:10.329 [2024-07-23 10:54:58.558814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.558841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.558923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.558949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.559031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.559058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.559147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.559173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.559261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.559286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 
00:34:10.329 [2024-07-23 10:54:58.559381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.559408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.559493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.559519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.559597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.559622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.559716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.559744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.559836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.559862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 
00:34:10.329 [2024-07-23 10:54:58.559959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.559995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.560096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.560123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.560215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.560242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.560333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.560361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.560455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.560494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 
00:34:10.329 [2024-07-23 10:54:58.560598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.560638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.560767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.560813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.560894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.560920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.561007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.561037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.561130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.561157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 
00:34:10.329 [2024-07-23 10:54:58.561244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.561272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.561365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.561390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.561471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.561505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.561588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.561613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.561710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.561736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 
00:34:10.329 [2024-07-23 10:54:58.561825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.561854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.561938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.561964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.562043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.562069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.562157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.562182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.562263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.562289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 
00:34:10.329 [2024-07-23 10:54:58.562374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-07-23 10:54:58.562399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-07-23 10:54:58.562493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.562522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.562620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.562645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.562733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.562758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.562853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.562879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 
00:34:10.330 [2024-07-23 10:54:58.562963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.562988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.563074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.563099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.563179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.563206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.563312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.563356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.563448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.563474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 
00:34:10.330 [2024-07-23 10:54:58.563580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.563607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.563691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.563716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.563800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.563825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.563914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.563939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.564027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.564052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 
00:34:10.330 [2024-07-23 10:54:58.564142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.564167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.564244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.564269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.564355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.564385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.564485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.564527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.564623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.564649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 
00:34:10.330 [2024-07-23 10:54:58.564743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.564776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.564871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.564899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.564987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.565013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.565104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.565137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.565226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.565252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 
00:34:10.330 [2024-07-23 10:54:58.565341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.565374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.565461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.565494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.565575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.565600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.565682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.565708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.565792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.565817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 
00:34:10.330 [2024-07-23 10:54:58.565905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.565931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.566020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.566047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.566139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.566166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.566250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.566275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.566369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.566398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 
00:34:10.330 [2024-07-23 10:54:58.566500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.566530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.566643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.566672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.566760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.566787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-07-23 10:54:58.566877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-07-23 10:54:58.566904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.566992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.567018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 
00:34:10.331 [2024-07-23 10:54:58.567111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.567137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.567229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.567257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.567343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.567369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.567458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.567493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.567596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.567620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 
00:34:10.331 [2024-07-23 10:54:58.567702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.567732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.567820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.567845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.567940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.567970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.568057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.568087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.568188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.568215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 
00:34:10.331 [2024-07-23 10:54:58.568310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.568338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.568435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.568463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.568574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.568603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.568700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.568727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.568820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.568846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 
00:34:10.331 [2024-07-23 10:54:58.568928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.568954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.569039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.569068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.569157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.569182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.569267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.569294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.569389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.569415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 
00:34:10.331 [2024-07-23 10:54:58.569506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.569533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.569627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.569654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.569737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.569765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.569859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.569886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.569977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.570005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 
00:34:10.331 [2024-07-23 10:54:58.570091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.570118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.570206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.570232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.570318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.570344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.570433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.570460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.570548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.570573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 
00:34:10.331 [2024-07-23 10:54:58.570658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.570684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.570773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.570799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.570886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.570916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.570993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.571019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.571108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.571136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 
00:34:10.331 [2024-07-23 10:54:58.571220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-07-23 10:54:58.571246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-07-23 10:54:58.571335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.571362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.571450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.571477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.571581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.571607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.571688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.571712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 
00:34:10.332 [2024-07-23 10:54:58.571803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.571830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.571929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.571964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.572068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.572109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.572211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.572238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.572327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.572352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 
00:34:10.332 [2024-07-23 10:54:58.572436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.572462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.572571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.572597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.572685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.572711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.572795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.572821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.572906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.572931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 
00:34:10.332 [2024-07-23 10:54:58.573013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.573038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.573127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.573152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.573251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.573283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.573373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.573400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.573490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.573520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 
00:34:10.332 [2024-07-23 10:54:58.573614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.573640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.573729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.573755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.573848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.573873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.573971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.573997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.574087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.574113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 
00:34:10.332 [2024-07-23 10:54:58.574194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.574220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.574298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.574324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.574410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.574437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.574534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.574564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.574653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.574677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 
00:34:10.332 [2024-07-23 10:54:58.574761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.574786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.574866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.574891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.574983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.575008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.575094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.575119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-07-23 10:54:58.575197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-07-23 10:54:58.575222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 
00:34:10.332 [2024-07-23 10:54:58.575307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.332 [2024-07-23 10:54:58.575332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.332 qpair failed and we were unable to recover it.
00:34:10.332 [2024-07-23 10:54:58.575537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.332 [2024-07-23 10:54:58.575578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.332 qpair failed and we were unable to recover it.
00:34:10.333 [2024-07-23 10:54:58.576345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.333 [2024-07-23 10:54:58.576373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.333 qpair failed and we were unable to recover it.
00:34:10.336 [the same three-message sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7fb6f0000b90 / 0x1f80990 / 0x7fb6e8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously through 2024-07-23 10:54:58.588631]
00:34:10.336 [2024-07-23 10:54:58.588717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.588742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.588827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.588859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.588955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.588979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.589074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.589100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.589193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.589221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 
00:34:10.336 [2024-07-23 10:54:58.589312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.589341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.589431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.589456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.589562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.589590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.589683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.589719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.589818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.589850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 
00:34:10.336 [2024-07-23 10:54:58.589957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.589988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.590071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.590100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.590194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.590221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.590311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.590338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.590419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.590445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 
00:34:10.336 [2024-07-23 10:54:58.590540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.590566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.590652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.590676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.590760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.590784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.590880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.590904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.590995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.591019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 
00:34:10.336 [2024-07-23 10:54:58.591109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.591134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.591221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.591254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.591352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.591379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.591463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.591499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.591595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.591621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 
00:34:10.336 [2024-07-23 10:54:58.591717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.591745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.591840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.591867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.591959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.591987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.592079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.592104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.592193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.592218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 
00:34:10.336 [2024-07-23 10:54:58.592304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.592329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.592424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.592449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.592540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.592565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.336 [2024-07-23 10:54:58.592651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.336 [2024-07-23 10:54:58.592675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.336 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.592753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.592777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 
00:34:10.337 [2024-07-23 10:54:58.592867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.592896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.592990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.593015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.593095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.593120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.593200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.593225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.593314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.593341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 
00:34:10.337 [2024-07-23 10:54:58.593429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.593454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.593552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.593586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.593675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.593702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.593795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.593831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.593919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.593946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 
00:34:10.337 [2024-07-23 10:54:58.594051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.594099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.594184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.594211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.594298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.594326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.594419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.594448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.594550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.594575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 
00:34:10.337 [2024-07-23 10:54:58.594663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.594691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.594793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.594818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.594897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.594922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.595017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.595043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.595126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.595151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 
00:34:10.337 [2024-07-23 10:54:58.595239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.595264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.595356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.595383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.595474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.595507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.595603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.595628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.595719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.595745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 
00:34:10.337 [2024-07-23 10:54:58.595846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.595873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.595956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.595982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.596064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.596092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.596185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.596214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.596304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.596338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 
00:34:10.337 [2024-07-23 10:54:58.596440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.596465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.596557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.596584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.596672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.596698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.596783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.596809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.596904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.596929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 
00:34:10.337 [2024-07-23 10:54:58.597016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.597049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.337 [2024-07-23 10:54:58.597133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.337 [2024-07-23 10:54:58.597160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.337 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.597253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.597281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.597370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.597398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.597511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.597539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 
00:34:10.338 [2024-07-23 10:54:58.597629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.597656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.597749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.597775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.597857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.597883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.597970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.597997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.598091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.598123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 
00:34:10.338 [2024-07-23 10:54:58.598207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.598232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.598324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.598349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.598434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.598459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.598551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.598580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.598674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.598705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 
00:34:10.338 [2024-07-23 10:54:58.598795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.598822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.598914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.598942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.599032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.599057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.599134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.599159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.599244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.599270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 
00:34:10.338 [2024-07-23 10:54:58.599354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.599379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.599466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.599502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.599592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.599618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.599709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.599735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.599827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.599852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 
00:34:10.338 [2024-07-23 10:54:58.599941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.599966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.600054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.600079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.600174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.600199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.600283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.600307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.600394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.600419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 
00:34:10.338 [2024-07-23 10:54:58.600507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.600533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.600625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.600651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.600737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.600767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.600844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.600869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.600961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.600987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 
00:34:10.338 [2024-07-23 10:54:58.601083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.601110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.601186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.601212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.601304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.601332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.601416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.601446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-07-23 10:54:58.601559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-07-23 10:54:58.601594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 
00:34:10.339 [2024-07-23 10:54:58.601684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.601711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.601797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.601825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.601905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.601930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.602023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.602049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.602165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.602191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 
00:34:10.339 [2024-07-23 10:54:58.602274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.602299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.602396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.602421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.602508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.602535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.602628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.602665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.602763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.602791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 
00:34:10.339 [2024-07-23 10:54:58.602877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.602906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.602997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.603024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.603122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.603148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.603232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.603263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.603354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.603381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 
00:34:10.339 [2024-07-23 10:54:58.603469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.603507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.603599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.603626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.603706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.603738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.603837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.603863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.603965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.603993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 
00:34:10.339 [2024-07-23 10:54:58.604090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.604119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.604238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.604265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.604355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.604380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.604468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.604509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.604589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.604615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 
00:34:10.339 [2024-07-23 10:54:58.604702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.604729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.604814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.604839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.604925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.604951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.605040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.605066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.605150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.605178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 
00:34:10.339 [2024-07-23 10:54:58.605255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.605281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.605367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-07-23 10:54:58.605393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-07-23 10:54:58.605486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.605513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.605612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.605639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.605724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.605750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 
00:34:10.340 [2024-07-23 10:54:58.605846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.605872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.605961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.605987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.606088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.606116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.606211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.606240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.606326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.606353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 
00:34:10.340 [2024-07-23 10:54:58.606450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.606477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.606570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.606596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.606689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.606716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.606800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.606825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.606917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.606944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 
00:34:10.340 [2024-07-23 10:54:58.607036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.607063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.607150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.607177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.607266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.607292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.607383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.607416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.607526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.607564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 
00:34:10.340 [2024-07-23 10:54:58.607677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.607706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.607793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.607819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.607908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.607934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.608020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.608046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.608140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.608166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 
00:34:10.340 [2024-07-23 10:54:58.608252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.608283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.608374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.608400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.608489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.608515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.608608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.608634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.608726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.608758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 
00:34:10.340 [2024-07-23 10:54:58.608856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.608884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.608975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.609000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.609080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.609104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.609185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.609210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.609296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.609323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 
00:34:10.340 [2024-07-23 10:54:58.609411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.609436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.609524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.609551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.609641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.609667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.609750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.609775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 00:34:10.340 [2024-07-23 10:54:58.609857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.340 [2024-07-23 10:54:58.609883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.340 qpair failed and we were unable to recover it. 
00:34:10.340 [2024-07-23 10:54:58.609961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.609985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.610078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.610102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.610192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.610217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.610314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.610347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.610443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.610473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 
00:34:10.341 [2024-07-23 10:54:58.610574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.610601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.610691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.610716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.610793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.610818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.610904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.610928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.611016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.611041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 
00:34:10.341 [2024-07-23 10:54:58.611137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.611164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.611241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.611267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.611358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.611385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.611472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.611504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.611598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.611623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 
00:34:10.341 [2024-07-23 10:54:58.611712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.611737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.611830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.611860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.611948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.611973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.612057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.612081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.612180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.612209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 
00:34:10.341 [2024-07-23 10:54:58.612297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.612323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.612416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.612443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.612548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.612574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.612657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.612682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.612773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.612798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 
00:34:10.341 [2024-07-23 10:54:58.612894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.612919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.613014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.613040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.613130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.613156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.613243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.613269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.613353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.613379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 
00:34:10.341 [2024-07-23 10:54:58.613465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.613497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.613582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.613607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.613688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.613713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.613790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.613815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.613903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.613928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 
00:34:10.341 [2024-07-23 10:54:58.614006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.614031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.614113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.614137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.614226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.614253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.341 qpair failed and we were unable to recover it. 00:34:10.341 [2024-07-23 10:54:58.614341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.341 [2024-07-23 10:54:58.614367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.614460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.614493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 
00:34:10.342 [2024-07-23 10:54:58.614587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.614612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.614697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.614723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.614809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.614835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.614923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.614954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.615043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.615067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 
00:34:10.342 [2024-07-23 10:54:58.615149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.615173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.615256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.615280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.615375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.615399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.615487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.615512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.615599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.615624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 
00:34:10.342 [2024-07-23 10:54:58.615715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.615741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.615851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.615881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.615974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.616000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.616110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.616137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.616226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.616252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 
00:34:10.342 [2024-07-23 10:54:58.616348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.616376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.616474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.616512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.616628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.616655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.616741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.616767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.616848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.616873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 
00:34:10.342 [2024-07-23 10:54:58.616966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.616991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.617083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.617109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.617202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.617228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.617311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.617336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.617432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.617459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 
00:34:10.342 [2024-07-23 10:54:58.617556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.617585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.617678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.617706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.617793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.617820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.617910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.617939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.618031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.618058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 
00:34:10.342 [2024-07-23 10:54:58.618145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.618176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.618276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.618304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.618398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.618429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.618529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.618555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.618644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.618669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 
00:34:10.342 [2024-07-23 10:54:58.618755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.618782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.342 qpair failed and we were unable to recover it. 00:34:10.342 [2024-07-23 10:54:58.618871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.342 [2024-07-23 10:54:58.618898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.343 qpair failed and we were unable to recover it. 00:34:10.343 [2024-07-23 10:54:58.618984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.343 [2024-07-23 10:54:58.619011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.343 qpair failed and we were unable to recover it. 00:34:10.343 [2024-07-23 10:54:58.619095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.343 [2024-07-23 10:54:58.619122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.343 qpair failed and we were unable to recover it. 00:34:10.343 [2024-07-23 10:54:58.619204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.343 [2024-07-23 10:54:58.619231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.343 qpair failed and we were unable to recover it. 
00:34:10.343 [2024-07-23 10:54:58.619319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.343 [2024-07-23 10:54:58.619348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.343 qpair failed and we were unable to recover it. 00:34:10.343 [2024-07-23 10:54:58.619446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.343 [2024-07-23 10:54:58.619472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.343 qpair failed and we were unable to recover it. 00:34:10.343 [2024-07-23 10:54:58.619570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.343 [2024-07-23 10:54:58.619596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.343 qpair failed and we were unable to recover it. 00:34:10.343 [2024-07-23 10:54:58.619677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.343 [2024-07-23 10:54:58.619703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.343 qpair failed and we were unable to recover it. 00:34:10.343 [2024-07-23 10:54:58.619789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.343 [2024-07-23 10:54:58.619814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.343 qpair failed and we were unable to recover it. 
00:34:10.343 [2024-07-23 10:54:58.619895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.619919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.620002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.620026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.620115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.620140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.620224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.620250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.620345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.620380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.620473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.620514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.620612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.620638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.620720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.620747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.620832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.620857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.620945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.620971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.621061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.621087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.621168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.621193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.621285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.621314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.621404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.621430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.621515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.621544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.621634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.621662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.621773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.621800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.621885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.621910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.621991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.622016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.622109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.622137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.622220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.622246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.622337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.622364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.622450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.622476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.343 [2024-07-23 10:54:58.622577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.343 [2024-07-23 10:54:58.622604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.343 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.622692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.622719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.622809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.622840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.622933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.622961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.623055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.623084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.623174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.623203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.623292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.623318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.623410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.623437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.623536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.623562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.623650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.623675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.623762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.623788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.623886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.623914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.624002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.624027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.624111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.624139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.624229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.624256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.624343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.624370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.624468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.624502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.624588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.624614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.624699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.624723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.624812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.624836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.624923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.624947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.625032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.625056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.625149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.625176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.625260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.625285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.625374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.625401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.625493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.625520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.625608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.625634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.625718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.625743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.625830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.625857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.625942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.625973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.626071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.626097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.626190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.626218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.626310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.626347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.626445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.626474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.626582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.626608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.626689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.626714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.626801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.626827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.344 [2024-07-23 10:54:58.626914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.344 [2024-07-23 10:54:58.626938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.344 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.627020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.627044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.627132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.627157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.627242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.627267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.627353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.627378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.627465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.627495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.627591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.627617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.627706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.627732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.627817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.627844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.627934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.627963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.628046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.628073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.628159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.628187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.628274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.628300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.628378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.628404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.628507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.628534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.628628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.628653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.628737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.628763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.628845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.628871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.628972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.629000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.629094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.629121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.629211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.629237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.629330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.629358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.629444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.629471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.629581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.629609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.629704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.629729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.629817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.629842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.629924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.629948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.630032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.630056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.630132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.630156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.630251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.630276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.630369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.630394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.630485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.630512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.630601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.630631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.630726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.630757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.630863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.630890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.630990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.631020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.631111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.631139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.631235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.631263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.631350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.631377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.631467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.631502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.631595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.631622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.631711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.345 [2024-07-23 10:54:58.631737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.345 qpair failed and we were unable to recover it.
00:34:10.345 [2024-07-23 10:54:58.631824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.631851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.631951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.631978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.632072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.632101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.632194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.632222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.632326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.632352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.632445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.632472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.632581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.632608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.632698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.632725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.632806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.632832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.632920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.632946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.633044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.633071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.633163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.633190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.633270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.346 [2024-07-23 10:54:58.633296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.346 qpair failed and we were unable to recover it.
00:34:10.346 [2024-07-23 10:54:58.633387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.633413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.633506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.633532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.633622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.633648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.633738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.633764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.633849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.633879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 
00:34:10.346 [2024-07-23 10:54:58.633971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.633996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.634080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.634105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.634200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.634229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.634325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.634354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.634440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.634467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 
00:34:10.346 [2024-07-23 10:54:58.634567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.634594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.634679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.634705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.634797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.634824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.634915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.634942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.635025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.635054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 
00:34:10.346 [2024-07-23 10:54:58.635157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.635183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.635270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.635297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.635385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.635413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.635529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.635557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.635654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.635681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 
00:34:10.346 [2024-07-23 10:54:58.635760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.635787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.635879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.635907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.636005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.636032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.636117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.636144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.636236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.636262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 
00:34:10.346 [2024-07-23 10:54:58.636345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.636371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.636458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.346 [2024-07-23 10:54:58.636492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.346 qpair failed and we were unable to recover it. 00:34:10.346 [2024-07-23 10:54:58.636587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.636615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.636695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.636720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.636809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.636837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 
00:34:10.347 [2024-07-23 10:54:58.636927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.636954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.637034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.637065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.637148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.637174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.637319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.637345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.637435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.637462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 
00:34:10.347 [2024-07-23 10:54:58.637567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.637593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.637693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.637721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.637856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.637882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.637980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.638008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.638110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.638145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 
00:34:10.347 [2024-07-23 10:54:58.638241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.638267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.638349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.638374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.638465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.638497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.638587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.638615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.638703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.638728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 
00:34:10.347 [2024-07-23 10:54:58.638832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.638858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.638949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.638975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.639062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.639087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.639177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.639202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.639291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.639317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 
00:34:10.347 [2024-07-23 10:54:58.639404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.639429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.639587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.639616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.639712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.639738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.639829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.639855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.639942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.639968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 
00:34:10.347 [2024-07-23 10:54:58.640059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.640086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.640172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.640197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.640280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.640306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.640400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.640429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.640529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.640555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 
00:34:10.347 [2024-07-23 10:54:58.640656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.640682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.640766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.640792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.640872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.640897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.640988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.641014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.641110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.641137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 
00:34:10.347 [2024-07-23 10:54:58.641231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.641260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.641354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.641382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.641471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.347 [2024-07-23 10:54:58.641503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.347 qpair failed and we were unable to recover it. 00:34:10.347 [2024-07-23 10:54:58.641594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.348 [2024-07-23 10:54:58.641621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.348 qpair failed and we were unable to recover it. 00:34:10.348 [2024-07-23 10:54:58.641699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.348 [2024-07-23 10:54:58.641725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.348 qpair failed and we were unable to recover it. 
00:34:10.348 [2024-07-23 10:54:58.641815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.348 [2024-07-23 10:54:58.641843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.348 qpair failed and we were unable to recover it. 00:34:10.348 [2024-07-23 10:54:58.641931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.348 [2024-07-23 10:54:58.641962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.348 qpair failed and we were unable to recover it. 00:34:10.348 [2024-07-23 10:54:58.642057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.348 [2024-07-23 10:54:58.642085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.348 qpair failed and we were unable to recover it. 00:34:10.348 [2024-07-23 10:54:58.642177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.348 [2024-07-23 10:54:58.642203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.348 qpair failed and we were unable to recover it. 00:34:10.348 [2024-07-23 10:54:58.642297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.348 [2024-07-23 10:54:58.642324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.348 qpair failed and we were unable to recover it. 
00:34:10.348 [2024-07-23 10:54:58.642413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.348 [2024-07-23 10:54:58.642439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.348 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111, ECONNREFUSED) / "sock connection error" / "qpair failed and we were unable to recover it" triplet repeats continuously from 10:54:58.642 through 10:54:58.656, alternating across tqpair pointers 0x1f80990, 0x7fb6e0000b90, 0x7fb6e8000b90, and 0x7fb6f0000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:10.351 [2024-07-23 10:54:58.656616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.656650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 00:34:10.351 [2024-07-23 10:54:58.656771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.656799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 00:34:10.351 [2024-07-23 10:54:58.656892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.656920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 00:34:10.351 [2024-07-23 10:54:58.657018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.657045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 00:34:10.351 [2024-07-23 10:54:58.657133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.657159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 
00:34:10.351 [2024-07-23 10:54:58.657244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.657270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 00:34:10.351 [2024-07-23 10:54:58.657357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.657383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 00:34:10.351 [2024-07-23 10:54:58.657471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.657502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 00:34:10.351 [2024-07-23 10:54:58.657621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.657665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 00:34:10.351 [2024-07-23 10:54:58.657757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.657784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 
00:34:10.351 [2024-07-23 10:54:58.657875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.657902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 00:34:10.351 [2024-07-23 10:54:58.657986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.658012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 00:34:10.351 [2024-07-23 10:54:58.658093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.658119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 00:34:10.351 [2024-07-23 10:54:58.658200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.658225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 00:34:10.351 [2024-07-23 10:54:58.658327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.351 [2024-07-23 10:54:58.658357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.351 qpair failed and we were unable to recover it. 
00:34:10.352 [2024-07-23 10:54:58.658448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.658475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.658582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.658628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.658738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.658774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.658871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.658899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.658987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.659014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 
00:34:10.352 [2024-07-23 10:54:58.659106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.659134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.659222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.659249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.659333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.659359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.659441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.659467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.659558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.659584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 
00:34:10.352 [2024-07-23 10:54:58.659670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.659696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.659782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.659809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.659892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.659921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.660015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.660042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.660129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.660158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 
00:34:10.352 [2024-07-23 10:54:58.660251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.660278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.660366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.660391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.660473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.660509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.660596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.660623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.660702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.660727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 
00:34:10.352 [2024-07-23 10:54:58.660814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.660839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.660924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.660949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.661026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.661051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.661134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.661161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.661252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.661280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 
00:34:10.352 [2024-07-23 10:54:58.661370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.661396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.661477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.661518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.661607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.661633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.661726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.661752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.661842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.661868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 
00:34:10.352 [2024-07-23 10:54:58.661957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.661984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.662066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.662091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.662176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.662201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.662289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.662317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.662403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.662428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 
00:34:10.352 [2024-07-23 10:54:58.662519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.662549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.352 qpair failed and we were unable to recover it. 00:34:10.352 [2024-07-23 10:54:58.662636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.352 [2024-07-23 10:54:58.662662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.662744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.662770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.662857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.662884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.663017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.663044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 
00:34:10.353 [2024-07-23 10:54:58.663128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.663154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.663237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.663268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.663352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.663380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.663472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.663504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.663584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.663611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 
00:34:10.353 [2024-07-23 10:54:58.663701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.663726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.663811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.663838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.663974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.664025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.664138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.664174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.664280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.664306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 
00:34:10.353 [2024-07-23 10:54:58.664389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.664415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.664508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.664534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.664625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.664651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.664740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.664766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.664856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.664882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 
00:34:10.353 [2024-07-23 10:54:58.664974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.665000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.665087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.665113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.665195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.665220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.665315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.665359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.665465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.665502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 
00:34:10.353 [2024-07-23 10:54:58.665586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.665612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.665700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.665726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.665804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.665830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.665907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.665933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 00:34:10.353 [2024-07-23 10:54:58.666021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.666046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.353 qpair failed and we were unable to recover it. 
00:34:10.353 [2024-07-23 10:54:58.666136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.353 [2024-07-23 10:54:58.666161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.666244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.666269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.666362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.666390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.666470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.666509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.666597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.666623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 
00:34:10.354 [2024-07-23 10:54:58.666705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.666730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.666814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.666839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.666928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.666953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.667039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.667067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.667157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.667184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 
00:34:10.354 [2024-07-23 10:54:58.667268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.667295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.667384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.667411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.667498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.667525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.667609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.667635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.667723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.667749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 
00:34:10.354 [2024-07-23 10:54:58.667835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.667861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.667945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.667971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.668060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.668088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.668172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.668197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.668287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.668313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 
00:34:10.354 [2024-07-23 10:54:58.668397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.668423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.668517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.668543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.668626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.668651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.668728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.668753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.668847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.668873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 
00:34:10.354 [2024-07-23 10:54:58.668956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.668981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.669073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.669100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.669192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.669218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.669301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.669327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.669414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.669439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 
00:34:10.354 [2024-07-23 10:54:58.669532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.669562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.669659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.669685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.669775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.669801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.669886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.669911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.669998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.670025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 
00:34:10.354 [2024-07-23 10:54:58.670113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.670139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.670230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.670273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.354 [2024-07-23 10:54:58.670379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.354 [2024-07-23 10:54:58.670409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.354 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.670515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.670543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.670628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.670655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 
00:34:10.355 [2024-07-23 10:54:58.670743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.670770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.670858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.670884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.670967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.670993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.671079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.671106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.671195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.671220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 
00:34:10.355 [2024-07-23 10:54:58.671305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.671330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.671414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.671440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.671537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.671564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.671647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.671673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.671753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.671779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 
00:34:10.355 [2024-07-23 10:54:58.671863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.671889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.671984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.672010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.672104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.672130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.672215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.672241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.672335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.672363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 
00:34:10.355 [2024-07-23 10:54:58.672464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.672520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.672602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.672628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.672717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.672745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.672848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.672880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.672978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.673004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 
00:34:10.355 [2024-07-23 10:54:58.673086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.673111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.673199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.673225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.673314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.673339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.673416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.673441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.673534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.673561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 
00:34:10.355 [2024-07-23 10:54:58.673646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.673670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.673755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.673780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.673859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.673884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.673965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.673990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.674076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.674103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 
00:34:10.355 [2024-07-23 10:54:58.674186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.674217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.674305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.674332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.674422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.674450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.674550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.674577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.355 [2024-07-23 10:54:58.674683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.674728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 
00:34:10.355 [2024-07-23 10:54:58.674835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.355 [2024-07-23 10:54:58.674880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.355 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.674973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.674998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.675086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.675112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.675204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.675232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.675320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.675345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 
00:34:10.356 [2024-07-23 10:54:58.675424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.675450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.675606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.675650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.675735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.675760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.675847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.675874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.675964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.675991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 
00:34:10.356 [2024-07-23 10:54:58.676083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.676108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.676192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.676218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.676304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.676333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.676422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.676448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.676536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.676562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 
00:34:10.356 [2024-07-23 10:54:58.676640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.676666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.676868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.676907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.676997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.677024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.677108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.677134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.677328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.677354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 
00:34:10.356 [2024-07-23 10:54:58.677450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.677476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.677574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.677600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.677684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.677716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.677802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.677829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 00:34:10.356 [2024-07-23 10:54:58.677911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.356 [2024-07-23 10:54:58.677937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.356 qpair failed and we were unable to recover it. 
00:34:10.356 [2024-07-23 10:54:58.678027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.678053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.678134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.678160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.678272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.678319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.678415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.678441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.678536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.678562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.678658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.678683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.678766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.678792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.678881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.678907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.678989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.679017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.679098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.679125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.679214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.679242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.679331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.679357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.679445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.356 [2024-07-23 10:54:58.679471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.356 qpair failed and we were unable to recover it.
00:34:10.356 [2024-07-23 10:54:58.679573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.679599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.679691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.679717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.679794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.679820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.679907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.679937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.680037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.680064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.680151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.680178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.680264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.680291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.680374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.680400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.680508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.680536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.680629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.680655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.680746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.680773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.680861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.680891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.680980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.681006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.681094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.681119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.681207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.681235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.681324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.681350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.681434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.681460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.681552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.681578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.681672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.681699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.681779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.681804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.681892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.681918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.682005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.682030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.682117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.682143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.682230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.682256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.682345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.682373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.682477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.682515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.682603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.682630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.682728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.682762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.682847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.682874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.682958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.682994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.683090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.683118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.357 qpair failed and we were unable to recover it.
00:34:10.357 [2024-07-23 10:54:58.683205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.357 [2024-07-23 10:54:58.683233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.683430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.683456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.683566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.683600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.683708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.683734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.683820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.683847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.683929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.683955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.684041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.684069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.684193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.684256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.684364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.684398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.684492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.684520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.684611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.684639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.684727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.684753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.684850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.684877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.684969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.684997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.685091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.685117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.685204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.685231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.685323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.685350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.685454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.685510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.685613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.685659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.685747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.685773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.685919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.685946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.686040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.686068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.686159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.686185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.686282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.686314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.686418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.686444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.686544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.686571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.686658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.686684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.686762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.686786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.686873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.686899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.686981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.687006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.687204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.687232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.687321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.687348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.687448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.687496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.687621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.687659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.687792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.687840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.687930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.687962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.688046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.688072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.358 [2024-07-23 10:54:58.688186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.358 [2024-07-23 10:54:58.688230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.358 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.688334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.688370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.688469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.688506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.688606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.688632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.688746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.688774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.688885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.688911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.689020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.689055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.689149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.689174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.689261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.689288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.689416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.689466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.689618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.689670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.689767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.689798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.689890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.689918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.690006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.359 [2024-07-23 10:54:58.690032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.359 qpair failed and we were unable to recover it.
00:34:10.359 [2024-07-23 10:54:58.690120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.690148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.690234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.690261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.690361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.690397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.690514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.690544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.690634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.690659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 
00:34:10.359 [2024-07-23 10:54:58.690746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.690771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.690854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.690880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.690966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.690991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.691081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.691110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.691192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.691219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 
00:34:10.359 [2024-07-23 10:54:58.691316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.691342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.691431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.691457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.691568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.691619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.691733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.691776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.692005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.692056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 
00:34:10.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3951233 Killed "${NVMF_APP[@]}" "$@" 00:34:10.359 [2024-07-23 10:54:58.692183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.692228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.692331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.692365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.692463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.692495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:10.359 [2024-07-23 10:54:58.692592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.692619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 
00:34:10.359 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:10.359 [2024-07-23 10:54:58.692833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.692874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:10.359 [2024-07-23 10:54:58.692982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 [2024-07-23 10:54:58.693011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.359 [2024-07-23 10:54:58.693119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.359 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:10.359 [2024-07-23 10:54:58.693151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.359 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.693250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.693277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.360 qpair failed and we were unable to recover it. 
00:34:10.360 [2024-07-23 10:54:58.693384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.693427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.693519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.693545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.693638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.693664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.693778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.693804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.693915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.693954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 
00:34:10.360 [2024-07-23 10:54:58.694061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.694089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.694190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.694219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.694326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.694354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.694455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.694490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.694581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.694607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 
00:34:10.360 [2024-07-23 10:54:58.694707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.694749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.694870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.694925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.695032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.695061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.695163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.695190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.695285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.695312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 
00:34:10.360 [2024-07-23 10:54:58.695417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.695444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.695553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.695582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.695704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.695734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.695842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.695872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.695985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.696012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 
00:34:10.360 [2024-07-23 10:54:58.696095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.696122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.696217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.696244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.696330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.696357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.696438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.696465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.696564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.696598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 
00:34:10.360 [2024-07-23 10:54:58.696685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.696711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.696789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.696816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.696904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.696929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.697010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.697036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.697131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.697157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 
00:34:10.360 [2024-07-23 10:54:58.697251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.697276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.697358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.697384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.697487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.697513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.697612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.697639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 [2024-07-23 10:54:58.697743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.697769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 
00:34:10.360 [2024-07-23 10:54:58.697876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.360 [2024-07-23 10:54:58.697916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.360 qpair failed and we were unable to recover it. 00:34:10.360 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3951665 00:34:10.360 [2024-07-23 10:54:58.698010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.698038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.698139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3951665 00:34:10.361 [2024-07-23 10:54:58.698167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.361 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.698280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.698306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 
00:34:10.361 [2024-07-23 10:54:58.698408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3951665 ']' 00:34:10.361 [2024-07-23 10:54:58.698434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.698529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.698555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.361 [2024-07-23 10:54:58.698647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.698674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:10.361 [2024-07-23 10:54:58.698775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.698813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 
00:34:10.361 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.361 [2024-07-23 10:54:58.698916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.698946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:10.361 [2024-07-23 10:54:58.699046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.699072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.699171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 10:54:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.361 [2024-07-23 10:54:58.699197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.699291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.699316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 
00:34:10.361 [2024-07-23 10:54:58.699407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.699434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.699553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.699596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.699701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.699728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.699873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.699899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.700000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.700028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 
00:34:10.361 [2024-07-23 10:54:58.700130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.700156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.700259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.700288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.700397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.700427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.700555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.700596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 00:34:10.361 [2024-07-23 10:54:58.700684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.700710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 
00:34:10.361 [2024-07-23 10:54:58.700807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.361 [2024-07-23 10:54:58.700834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.361 qpair failed and we were unable to recover it. 
[... the same two-line pair — posix.c:1037:posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error ... addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." — repeats continuously from 10:54:58.700927 through 10:54:58.714994, alternating among tqpairs 0x1f80990, 0x7fb6f0000b90, and 0x7fb6e8000b90 ...]
00:34:10.364 [2024-07-23 10:54:58.715083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.364 [2024-07-23 10:54:58.715110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.364 qpair failed and we were unable to recover it. 00:34:10.364 [2024-07-23 10:54:58.715199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.364 [2024-07-23 10:54:58.715227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.364 qpair failed and we were unable to recover it. 00:34:10.364 [2024-07-23 10:54:58.715329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.364 [2024-07-23 10:54:58.715359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.364 qpair failed and we were unable to recover it. 00:34:10.364 [2024-07-23 10:54:58.715453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.364 [2024-07-23 10:54:58.715478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.364 qpair failed and we were unable to recover it. 00:34:10.364 [2024-07-23 10:54:58.715574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.364 [2024-07-23 10:54:58.715601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.364 qpair failed and we were unable to recover it. 
00:34:10.364 [2024-07-23 10:54:58.715703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.715730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.715840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.715867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.715979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.716005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.716103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.716129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.716228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.716258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 
00:34:10.365 [2024-07-23 10:54:58.716361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.716390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.716501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.716543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.716655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.716696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.716785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.716813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.716899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.716926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 
00:34:10.365 [2024-07-23 10:54:58.717017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.717044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.717138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.717167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.717259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.717287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.717377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.717405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.717491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.717518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 
00:34:10.365 [2024-07-23 10:54:58.717611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.717651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.717756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.717795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.717879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.717905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.717997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.718023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.718111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.718137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 
00:34:10.365 [2024-07-23 10:54:58.718215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.718241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.718330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.718358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.718443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.718468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.718570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.718596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.718682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.718708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 
00:34:10.365 [2024-07-23 10:54:58.718796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.718822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.718921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.718947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.719040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.719065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.719157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.719182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.719281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.719307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 
00:34:10.365 [2024-07-23 10:54:58.719415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.719443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.719541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.719573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.719668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.719696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.719789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.719816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.719910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.719936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 
00:34:10.365 [2024-07-23 10:54:58.720026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.720052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.720144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.720172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.365 [2024-07-23 10:54:58.720257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.365 [2024-07-23 10:54:58.720283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.365 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.720373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.720399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.720493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.720520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 
00:34:10.366 [2024-07-23 10:54:58.720607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.720633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.720740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.720789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.720874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.720901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.720986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.721012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.721116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.721157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 
00:34:10.366 [2024-07-23 10:54:58.721260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.721288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.721399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.721439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.721535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.721563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.721656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.721684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.721774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.721799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 
00:34:10.366 [2024-07-23 10:54:58.721885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.721910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.721990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.722015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.722106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.722131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.722209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.722234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.722328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.722355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 
00:34:10.366 [2024-07-23 10:54:58.722450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.722475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.722580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.722606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.722694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.722720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.722815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.722840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.722929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.722954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 
00:34:10.366 [2024-07-23 10:54:58.723044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.723071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.723151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.723177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.723262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.723288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.723375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.723401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.723503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.723530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 
00:34:10.366 [2024-07-23 10:54:58.723611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.723636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.723728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.723753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.723845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.723870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.723959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.723984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.724082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.724110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 
00:34:10.366 [2024-07-23 10:54:58.724200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.724225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.724309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.724341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.724423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.724447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.724550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.724575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 00:34:10.366 [2024-07-23 10:54:58.724663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.724687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.366 qpair failed and we were unable to recover it. 
00:34:10.366 [2024-07-23 10:54:58.724777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.366 [2024-07-23 10:54:58.724803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.724899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.724924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.725008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.725033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.725125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.725152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.725235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.725260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 
00:34:10.367 [2024-07-23 10:54:58.725359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.725385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.725473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.725504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.725599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.725623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.725714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.725738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.725834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.725861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 
00:34:10.367 [2024-07-23 10:54:58.725958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.725982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.726074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.726102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.726205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.726232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.726320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.726345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.726436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.726463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 
00:34:10.367 [2024-07-23 10:54:58.726568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.726596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.726675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.726700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.726784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.726808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.726897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.726923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.727007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.727034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 
00:34:10.367 [2024-07-23 10:54:58.727117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.727142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.727238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.727268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.727361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.727387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.727490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.727522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.727614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.727640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 
00:34:10.367 [2024-07-23 10:54:58.727729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.727757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.727850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.727878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.727960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.727986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.728084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.728110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.728198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.728224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 
00:34:10.367 [2024-07-23 10:54:58.728312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.728337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.367 [2024-07-23 10:54:58.728424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.367 [2024-07-23 10:54:58.728450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.367 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.728554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.728581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.728666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.728692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.728774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.728801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 
00:34:10.368 [2024-07-23 10:54:58.728881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.728907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.728999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.729026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.729123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.729151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.729245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.729275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.729365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.729393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 
00:34:10.368 [2024-07-23 10:54:58.729484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.729511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.729596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.729623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.729713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.729739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.729837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.729864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.729946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.729972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 
00:34:10.368 [2024-07-23 10:54:58.730060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.730089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.730179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.730205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.730290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.730316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.730399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.730425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.730516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.730542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 
00:34:10.368 [2024-07-23 10:54:58.730629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.730655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.730746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.730772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.730948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.730974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.731059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.731084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.731165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.731190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 
00:34:10.368 [2024-07-23 10:54:58.731274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.731299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.731381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.731410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.731510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.731539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.731621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.731647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.731743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.731769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 
00:34:10.368 [2024-07-23 10:54:58.731846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.731872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.731953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.368 [2024-07-23 10:54:58.731978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.368 qpair failed and we were unable to recover it. 00:34:10.368 [2024-07-23 10:54:58.732053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.732078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.732160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.732191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.732275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.732300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 
00:34:10.369 [2024-07-23 10:54:58.732396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.732424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.732518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.732545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.732630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.732656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.732739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.732764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.732848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.732873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 
00:34:10.369 [2024-07-23 10:54:58.732959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.732988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.733072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.733097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.733179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.733204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.733281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.733306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.733392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.733419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 
00:34:10.369 [2024-07-23 10:54:58.733545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.733574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.733670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.733697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.733782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.733808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.733887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.733913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.734001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.734027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 
00:34:10.369 [2024-07-23 10:54:58.734133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.734193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.734313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.734368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.734449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.734475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.734599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.734662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.734745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.734773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 
00:34:10.369 [2024-07-23 10:54:58.734852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.734878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.734962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.734988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.735066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.735092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.735177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.735204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.735297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.735324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 
00:34:10.369 [2024-07-23 10:54:58.735413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.735446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.735559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.735595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.735692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.735721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.735807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.735834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 00:34:10.369 [2024-07-23 10:54:58.735916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.369 [2024-07-23 10:54:58.735943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.369 qpair failed and we were unable to recover it. 
00:34:10.369–00:34:10.373 [log condensed: the same message pair — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=<handle> with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats without variation from 2024-07-23 10:54:58.736033 through 10:54:58.749566 for tqpair handles 0x7fb6e0000b90, 0x7fb6e8000b90, and 0x7fb6f0000b90.]
00:34:10.373 [2024-07-23 10:54:58.749680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.749707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.749814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.749841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.749934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.749960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.750043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.750069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.750149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.750176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 
00:34:10.373 [2024-07-23 10:54:58.750285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.750320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.750404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.750431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.750540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.750569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.750655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.750681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.750781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.750807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 
00:34:10.373 [2024-07-23 10:54:58.750889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.750915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.750998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.751026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.751111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.751139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.751225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.751252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.751357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.751384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 
00:34:10.373 [2024-07-23 10:54:58.751472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.751507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.751603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.751631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.751724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.751751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 
00:34:10.373 [2024-07-23 10:54:58.751783] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:34:10.373 [2024-07-23 10:54:58.751880] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:34:10.373 [2024-07-23 10:54:58.751847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.751878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 
00:34:10.373 [2024-07-23 10:54:58.751984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.752010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.752087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.752113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.752202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.752228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.752310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.752336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.752423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.752449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 
00:34:10.373 [2024-07-23 10:54:58.752549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.752575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.752767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.752794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.752885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.752913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.752990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.753016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.753100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.753127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 
00:34:10.373 [2024-07-23 10:54:58.753214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.753243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.753333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.753362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.753471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.753505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.373 [2024-07-23 10:54:58.753603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.373 [2024-07-23 10:54:58.753630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.373 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.753755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.753808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 
00:34:10.374 [2024-07-23 10:54:58.753898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.753926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.754010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.754037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.754132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.754159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.754243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.754269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.754358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.754387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 
00:34:10.374 [2024-07-23 10:54:58.754471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.754505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.754620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.754676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.754775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.754829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.754908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.754934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.755023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.755050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 
00:34:10.374 [2024-07-23 10:54:58.755130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.755162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.755249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.755276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.755369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.755396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.755477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.755520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.755608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.755636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 
00:34:10.374 [2024-07-23 10:54:58.755728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.755757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.755843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.755870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.755949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.755976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.756070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.756096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.756179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.756206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 
00:34:10.374 [2024-07-23 10:54:58.756303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.756333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.756432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.756460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.756558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.756589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.756680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.756709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.756802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.756829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 
00:34:10.374 [2024-07-23 10:54:58.756908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.756934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.757043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.757107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.757201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.757229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.757311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.757341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.757425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.757453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 
00:34:10.374 [2024-07-23 10:54:58.757578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.757607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.757690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.757716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.757823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.757849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.757933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.757960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.374 [2024-07-23 10:54:58.758070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.758132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 
00:34:10.374 [2024-07-23 10:54:58.758236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.374 [2024-07-23 10:54:58.758298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.374 qpair failed and we were unable to recover it. 00:34:10.375 [2024-07-23 10:54:58.758386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.375 [2024-07-23 10:54:58.758413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.375 qpair failed and we were unable to recover it. 00:34:10.375 [2024-07-23 10:54:58.758512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.375 [2024-07-23 10:54:58.758541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.375 qpair failed and we were unable to recover it. 00:34:10.375 [2024-07-23 10:54:58.758626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.375 [2024-07-23 10:54:58.758652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.375 qpair failed and we were unable to recover it. 00:34:10.375 [2024-07-23 10:54:58.758746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.375 [2024-07-23 10:54:58.758773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.375 qpair failed and we were unable to recover it. 
00:34:10.375 [2024-07-23 10:54:58.758864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.758891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.758978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.759007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.759113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.759140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.759223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.759249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.759339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.759368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.759452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.759485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.759579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.759606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.759688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.759715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.759807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.759833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.759920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.759947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.760035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.760068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.760165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.760193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.760274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.760299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.760381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.760406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.760490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.760517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.760616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.760642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.760723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.760752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.760844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.760872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.760963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.760993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.761100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.761127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.761214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.761242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.761329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.761356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.761467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.761533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.761620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.761647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.761733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.761760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.761849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.761877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.761964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.761992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.762093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.762121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.762202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.762229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.762317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.762345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.762470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.762529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.762622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.762651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.762751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.375 [2024-07-23 10:54:58.762779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.375 qpair failed and we were unable to recover it.
00:34:10.375 [2024-07-23 10:54:58.762862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.762888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.762970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.762996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.763087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.763114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.763203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.763229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.763319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.763344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.763426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.763453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.763548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.763573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.763652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.763677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.763762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.763788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.763867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.763893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.763976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.764001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.764088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.764113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.764200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.764230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.764317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.764346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.764456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.764497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.764598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.764626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.764710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.764736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.764818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.764849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.764938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.764964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.765183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.765212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.765308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.765335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.765418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.765444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.765550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.765577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.765671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.765699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.765790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.765818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.765901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.765928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.766026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.766052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.766135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.766161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.766248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.766275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.766356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.766382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.766469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.376 [2024-07-23 10:54:58.766507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.376 qpair failed and we were unable to recover it.
00:34:10.376 [2024-07-23 10:54:58.766602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.766629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.766719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.766745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.766836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.766865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.766955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.766981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.767089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.767116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.767196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.767222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.767301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.767326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.767453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.767519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.767618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.767646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.767734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.767761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.767843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.767870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.767947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.767974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.768055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.768080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.768170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.768196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.768296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.768321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.768419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.768448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.768565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.768593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.768683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.768711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.768792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.768818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.768907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.768934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.769015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.769042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.769128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.769153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.769238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.769265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.769358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.769385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.769478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.769511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.769596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.769623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.769702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.769733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.769830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.769856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.769967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.770024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.770118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.770147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.770236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.770263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.770346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.770373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.770459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.770497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.770587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.770617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.377 [2024-07-23 10:54:58.770716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.377 [2024-07-23 10:54:58.770743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.377 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.770845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.770873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.771005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.771031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.771120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.771147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.771236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.771262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.771411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.771462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.771571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.771600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.771709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.771763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.771842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.771868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.771959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.771987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.772072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.772101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.772192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.772220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.772315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.772343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.772453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.772515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.772601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.772628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.772748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.772804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.772888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.378 [2024-07-23 10:54:58.772915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.378 qpair failed and we were unable to recover it.
00:34:10.378 [2024-07-23 10:54:58.773028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.773083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.773174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.773200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.773292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.773320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.773400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.773426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.773513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.773541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 
00:34:10.378 [2024-07-23 10:54:58.773640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.773666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.773769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.773795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.773879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.773906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.773987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.774014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.774093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.774119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 
00:34:10.378 [2024-07-23 10:54:58.774203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.774230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.774307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.774334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.774432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.774461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.774573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.774601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.774690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.774718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 
00:34:10.378 [2024-07-23 10:54:58.774795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.774826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.774912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.774938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.775058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.775115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.775245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.775297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.378 [2024-07-23 10:54:58.775422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.775493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 
00:34:10.378 [2024-07-23 10:54:58.775592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.378 [2024-07-23 10:54:58.775620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.378 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.775702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.775731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.775846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.775907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.776014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.776079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.776164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.776190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 
00:34:10.379 [2024-07-23 10:54:58.776271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.776298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.776376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.776402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.776485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.776511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.776612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.776639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.776754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.776783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 
00:34:10.379 [2024-07-23 10:54:58.776875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.776904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.777000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.777028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.777117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.777143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.777230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.777256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.777339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.777365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 
00:34:10.379 [2024-07-23 10:54:58.777449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.777475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.777560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.777586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.777671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.777699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.777786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.777813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.777908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.777934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 
00:34:10.379 [2024-07-23 10:54:58.778033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.778059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.778154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.778181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.778267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.778297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.778384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.778411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.778500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.778529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 
00:34:10.379 [2024-07-23 10:54:58.778612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.778639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.778741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.778767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.379 [2024-07-23 10:54:58.778885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.379 [2024-07-23 10:54:58.778944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.379 qpair failed and we were unable to recover it. 00:34:10.667 [2024-07-23 10:54:58.779067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.667 [2024-07-23 10:54:58.779120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.667 qpair failed and we were unable to recover it. 00:34:10.667 [2024-07-23 10:54:58.779203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.667 [2024-07-23 10:54:58.779230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 
00:34:10.668 [2024-07-23 10:54:58.779311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.779338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.779426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.779455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.779559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.779586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.779675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.779701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.779789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.779816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 
00:34:10.668 [2024-07-23 10:54:58.779903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.779934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.780020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.780047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.780144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.780170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.780270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.780296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.780386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.780415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 
00:34:10.668 [2024-07-23 10:54:58.780507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.780535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.780627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.780655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.780738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.780764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.780852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.780882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.780971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.780998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 
00:34:10.668 [2024-07-23 10:54:58.781137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.781178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.781274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.781301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.781387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.781414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.781498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.781529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.781616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.781642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 
00:34:10.668 [2024-07-23 10:54:58.781730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.781757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.781858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.781884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.781966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.781992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.782081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.782109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 00:34:10.668 [2024-07-23 10:54:58.782194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.668 [2024-07-23 10:54:58.782220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.668 qpair failed and we were unable to recover it. 
00:34:10.668 [2024-07-23 10:54:58.782310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.668 [2024-07-23 10:54:58.782336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.668 qpair failed and we were unable to recover it.
[the three-line record above repeats with new timestamps from 10:54:58.782442 through 10:54:58.788718, cycling tqpair values 0x7fb6e8000b90, 0x7fb6f0000b90, and 0x7fb6e0000b90, always addr=10.0.0.2, port=4420]
00:34:10.670 EAL: No free 2048 kB hugepages reported on node 1
[the same connect()/qpair-failed record then resumes, repeating from 10:54:58.788801 through 10:54:58.796104 over the same three tqpair values, addr=10.0.0.2, port=4420]
00:34:10.672 [2024-07-23 10:54:58.796187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.672 [2024-07-23 10:54:58.796214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.672 qpair failed and we were unable to recover it. 00:34:10.672 [2024-07-23 10:54:58.796304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.672 [2024-07-23 10:54:58.796332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.672 qpair failed and we were unable to recover it. 00:34:10.672 [2024-07-23 10:54:58.796411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.672 [2024-07-23 10:54:58.796438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.672 qpair failed and we were unable to recover it. 00:34:10.672 [2024-07-23 10:54:58.796532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.672 [2024-07-23 10:54:58.796558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.672 qpair failed and we were unable to recover it. 00:34:10.672 [2024-07-23 10:54:58.796645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.672 [2024-07-23 10:54:58.796673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.672 qpair failed and we were unable to recover it. 
00:34:10.672 [2024-07-23 10:54:58.796765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.672 [2024-07-23 10:54:58.796791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.672 qpair failed and we were unable to recover it. 00:34:10.672 [2024-07-23 10:54:58.796889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.672 [2024-07-23 10:54:58.796916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.672 qpair failed and we were unable to recover it. 00:34:10.672 [2024-07-23 10:54:58.796997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.672 [2024-07-23 10:54:58.797022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.672 qpair failed and we were unable to recover it. 00:34:10.672 [2024-07-23 10:54:58.797115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.672 [2024-07-23 10:54:58.797141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.672 qpair failed and we were unable to recover it. 00:34:10.672 [2024-07-23 10:54:58.797227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.672 [2024-07-23 10:54:58.797253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.672 qpair failed and we were unable to recover it. 
00:34:10.672 [2024-07-23 10:54:58.797338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.672 [2024-07-23 10:54:58.797365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.672 qpair failed and we were unable to recover it. 00:34:10.672 [2024-07-23 10:54:58.797455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.797487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.797578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.797605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.797684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.797710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.797794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.797820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 
00:34:10.673 [2024-07-23 10:54:58.797912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.797939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.798036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.798065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.798269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.798298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.798407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.798434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.798534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.798562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 
00:34:10.673 [2024-07-23 10:54:58.798652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.798682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.798791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.798818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.798910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.798937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.799029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.799056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.799141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.799168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 
00:34:10.673 [2024-07-23 10:54:58.799256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.799287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.799429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.799457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.799563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.799590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.799675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.799702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.799793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.799822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 
00:34:10.673 [2024-07-23 10:54:58.799916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.799944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.800044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.800071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.800152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.800178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.800273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.673 [2024-07-23 10:54:58.800300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.673 qpair failed and we were unable to recover it. 00:34:10.673 [2024-07-23 10:54:58.800381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.800407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 
00:34:10.674 [2024-07-23 10:54:58.800503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.800530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.800640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.800666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.800757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.800783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.800867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.800893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.800984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.801010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 
00:34:10.674 [2024-07-23 10:54:58.801099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.801127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.801223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.801249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.801351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.801378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.801467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.801509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.801605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.801632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 
00:34:10.674 [2024-07-23 10:54:58.801723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.801750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.801837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.801863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.802062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.802092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.802189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.802217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.802316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.802342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 
00:34:10.674 [2024-07-23 10:54:58.802431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.802458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.802573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.802600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.802701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.802728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.802816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.802844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.802936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.802963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 
00:34:10.674 [2024-07-23 10:54:58.803045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.803070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.803153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.803179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.803261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.803286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.803368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.803399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.803496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.803524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 
00:34:10.674 [2024-07-23 10:54:58.803630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.803657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.803752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.803779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.803871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.803896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.803986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.804013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 00:34:10.674 [2024-07-23 10:54:58.804100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.804127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.674 qpair failed and we were unable to recover it. 
00:34:10.674 [2024-07-23 10:54:58.804209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.674 [2024-07-23 10:54:58.804239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 00:34:10.675 [2024-07-23 10:54:58.804320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.804347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 00:34:10.675 [2024-07-23 10:54:58.804443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.804470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 00:34:10.675 [2024-07-23 10:54:58.804571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.804597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 00:34:10.675 [2024-07-23 10:54:58.804700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.804727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 
00:34:10.675 [2024-07-23 10:54:58.804837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.804875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 00:34:10.675 [2024-07-23 10:54:58.804987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.805015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 00:34:10.675 [2024-07-23 10:54:58.805110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.805137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 00:34:10.675 [2024-07-23 10:54:58.805228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.805255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 00:34:10.675 [2024-07-23 10:54:58.805342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.805370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 
00:34:10.675 [2024-07-23 10:54:58.805465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.805500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 00:34:10.675 [2024-07-23 10:54:58.805594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.805620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 00:34:10.675 [2024-07-23 10:54:58.805707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.805735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 00:34:10.675 [2024-07-23 10:54:58.805950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.805978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 00:34:10.675 [2024-07-23 10:54:58.806080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.675 [2024-07-23 10:54:58.806108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.675 qpair failed and we were unable to recover it. 
00:34:10.679 [2024-07-23 10:54:58.819389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.819416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.819504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.819530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.819629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.819654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.819745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.819772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.819868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.819893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 
00:34:10.679 [2024-07-23 10:54:58.819980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.820006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.820085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.820111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.820198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.820224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.820304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.820330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.820420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.820445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 
00:34:10.679 [2024-07-23 10:54:58.820539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.820565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.820651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.820676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.820754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.820781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.820872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.820901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.820984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.821011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 
00:34:10.679 [2024-07-23 10:54:58.821107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.821136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.821230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.821257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.821356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.821384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.821467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.821498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.821586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.821613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 
00:34:10.679 [2024-07-23 10:54:58.821700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.821726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.821809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.821835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.821920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.821948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.822036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.822062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.679 qpair failed and we were unable to recover it. 00:34:10.679 [2024-07-23 10:54:58.822165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.679 [2024-07-23 10:54:58.822194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 
00:34:10.680 [2024-07-23 10:54:58.822286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.822312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.822403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.822430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.822513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.822540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.822632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.822661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.822756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.822782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 
00:34:10.680 [2024-07-23 10:54:58.822804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:10.680 [2024-07-23 10:54:58.822877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.822907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.822996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.823022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.823116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.823143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.823233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.823259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.823338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.823364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 
00:34:10.680 [2024-07-23 10:54:58.823450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.823476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.823582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.823609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.823694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.823720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.823820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.823845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.823942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.823968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 
00:34:10.680 [2024-07-23 10:54:58.824076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.824101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.824195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.824221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.824303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.824329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.824413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.824439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.824539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.824565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 
00:34:10.680 [2024-07-23 10:54:58.824648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.824674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.824761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.824787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.824871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.824897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.824993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.825027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.825119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.825147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 
00:34:10.680 [2024-07-23 10:54:58.825248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.825275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.825373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.825399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.825516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.825547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.825653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.825681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 00:34:10.680 [2024-07-23 10:54:58.825767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.680 [2024-07-23 10:54:58.825794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.680 qpair failed and we were unable to recover it. 
00:34:10.680 [2024-07-23 10:54:58.825877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.825903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.825991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.826018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.826107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.826134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.826235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.826261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.826366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.826394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 
00:34:10.681 [2024-07-23 10:54:58.826505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.826534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.826627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.826655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.826746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.826772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.826873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.826900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.826990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.827020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 
00:34:10.681 [2024-07-23 10:54:58.827114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.827141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.827239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.827267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.827376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.827403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.827511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.827540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.827624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.827650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 
00:34:10.681 [2024-07-23 10:54:58.827741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.827769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.827855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.827883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.827983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.828009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.828111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.828137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 00:34:10.681 [2024-07-23 10:54:58.828231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.681 [2024-07-23 10:54:58.828258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.681 qpair failed and we were unable to recover it. 
00:34:10.681 [2024-07-23 10:54:58.828340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.681 [2024-07-23 10:54:58.828376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.681 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 10:54:58.828461 through 10:54:58.842269 for tqpairs 0x7fb6f0000b90, 0x7fb6e8000b90 and 0x7fb6e0000b90 ...]
00:34:10.685 [2024-07-23 10:54:58.842358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.842387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 00:34:10.685 [2024-07-23 10:54:58.842493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.842519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 00:34:10.685 [2024-07-23 10:54:58.842622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.842648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 00:34:10.685 [2024-07-23 10:54:58.842738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.842764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 00:34:10.685 [2024-07-23 10:54:58.842857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.842884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 
00:34:10.685 [2024-07-23 10:54:58.842976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.843005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 00:34:10.685 [2024-07-23 10:54:58.843086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.843112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 00:34:10.685 [2024-07-23 10:54:58.843197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.843223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 00:34:10.685 [2024-07-23 10:54:58.843303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.843328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 00:34:10.685 [2024-07-23 10:54:58.843420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.843449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 
00:34:10.685 [2024-07-23 10:54:58.843552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.843579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 00:34:10.685 [2024-07-23 10:54:58.843785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.843813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 00:34:10.685 [2024-07-23 10:54:58.843911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.685 [2024-07-23 10:54:58.843936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.685 qpair failed and we were unable to recover it. 00:34:10.685 [2024-07-23 10:54:58.844026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.844053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.844140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.844166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 
00:34:10.686 [2024-07-23 10:54:58.844251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.844276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.844386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.844411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.844500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.844529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.844630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.844656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.844756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.844783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 
00:34:10.686 [2024-07-23 10:54:58.844872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.844899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.844980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.845007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.845094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.845121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.845210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.845236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.845328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.845356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 
00:34:10.686 [2024-07-23 10:54:58.845444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.845471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.845569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.845598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.845691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.845720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.845814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.845840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.845928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.845955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 
00:34:10.686 [2024-07-23 10:54:58.846045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.846070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.846156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.846182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.846279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.846307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.846399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.846427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.846520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.846547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 
00:34:10.686 [2024-07-23 10:54:58.846630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.846657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.846747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.846774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.846869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.846897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.847000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.847026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.847112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.847138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 
00:34:10.686 [2024-07-23 10:54:58.847241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.847267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.847372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.847401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.847502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.847529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.847617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.686 [2024-07-23 10:54:58.847642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.686 qpair failed and we were unable to recover it. 00:34:10.686 [2024-07-23 10:54:58.847733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.847762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 
00:34:10.687 [2024-07-23 10:54:58.847858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.847889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.847989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.848016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.848104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.848131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.848221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.848250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.848339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.848367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 
00:34:10.687 [2024-07-23 10:54:58.848457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.848488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.848592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.848619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.848715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.848741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.848828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.848854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.848939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.848965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 
00:34:10.687 [2024-07-23 10:54:58.849055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.849081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.849178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.849208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.849315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.849342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.849424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.849451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.849547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.849574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 
00:34:10.687 [2024-07-23 10:54:58.849662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.849691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.849783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.849810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.849908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.849934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.850020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.850047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.850136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.850163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 
00:34:10.687 [2024-07-23 10:54:58.850258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.850287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.850385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.850414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.850505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.850531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.850617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.850645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.850735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.850760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 
00:34:10.687 [2024-07-23 10:54:58.850854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.850882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.850971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.850997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.851104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.851130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.851222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.851249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 00:34:10.687 [2024-07-23 10:54:58.851339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.687 [2024-07-23 10:54:58.851365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.687 qpair failed and we were unable to recover it. 
00:34:10.687 [2024-07-23 10:54:58.851452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.851487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.851572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.851598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.851681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.851706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.851792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.851819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.851928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.851955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 
00:34:10.688 [2024-07-23 10:54:58.852070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.852099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.852190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.852217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.852313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.852341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.852439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.852464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.852561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.852588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 
00:34:10.688 [2024-07-23 10:54:58.852676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.852713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.852806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.852832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.852918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.852943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.853028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.853054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.853138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.853164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 
00:34:10.688 [2024-07-23 10:54:58.853249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.853274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.853368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.853396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.853500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.853527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.853631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.853657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.853748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.853774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 
00:34:10.688 [2024-07-23 10:54:58.853860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.853888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.854016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.854045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.854162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.854190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.854276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.854305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.854394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.854421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 
00:34:10.688 [2024-07-23 10:54:58.854508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.854537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.854631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.854657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.854744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.854770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.854855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.854880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.854962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.854988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 
00:34:10.688 [2024-07-23 10:54:58.855109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.855137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.688 qpair failed and we were unable to recover it. 00:34:10.688 [2024-07-23 10:54:58.855237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.688 [2024-07-23 10:54:58.855272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.855363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.855392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.855478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.855510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.855599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.855627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 
00:34:10.689 [2024-07-23 10:54:58.855718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.855746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.855827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.855853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.855939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.855965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.856064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.856092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.856189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.856217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 
00:34:10.689 [2024-07-23 10:54:58.856303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.856330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.856416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.856443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.856533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.856560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.856642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.856669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.856749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.856777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 
00:34:10.689 [2024-07-23 10:54:58.856866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.856894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.856983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.857010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.857091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.857117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.857208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.857234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.857319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.857344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 
00:34:10.689 [2024-07-23 10:54:58.857437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.857469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.857573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.857602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.857693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.857723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.857809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.857835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.857917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.857943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 
00:34:10.689 [2024-07-23 10:54:58.858032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.858058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.858152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.858180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.858264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.858292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.858380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.858408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.858498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.858525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 
00:34:10.689 [2024-07-23 10:54:58.858610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.858636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.858724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.858750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.858842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.858870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.858954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.858982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.859073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.859101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 
00:34:10.689 [2024-07-23 10:54:58.859186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.859214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.859415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.859443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.859543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.859571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.859669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.859696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.859778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.859804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 
00:34:10.689 [2024-07-23 10:54:58.859927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.859955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.860046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.860074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.860167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.860196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.860290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.860316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.689 qpair failed and we were unable to recover it. 00:34:10.689 [2024-07-23 10:54:58.860401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.689 [2024-07-23 10:54:58.860428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 
00:34:10.690 [2024-07-23 10:54:58.860514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.860542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.860625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.860651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.860734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.860763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.860843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.860869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.860961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.860989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 
00:34:10.690 [2024-07-23 10:54:58.861085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.861110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.861189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.861214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.861296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.861322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.861409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.861434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.861532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.861558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 
00:34:10.690 [2024-07-23 10:54:58.861650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.861675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.861757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.861781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.861864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.861889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.861973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.861998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.862084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.862111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 
00:34:10.690 [2024-07-23 10:54:58.862199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.862225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.862318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.862343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.862429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.862455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.862555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.862581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 00:34:10.690 [2024-07-23 10:54:58.862669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.690 [2024-07-23 10:54:58.862702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.690 qpair failed and we were unable to recover it. 
00:34:10.690 [2024-07-23 10:54:58.862799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.690 [2024-07-23 10:54:58.862828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.690 qpair failed and we were unable to recover it.
[The two-record error pair above (posix.c:1037:posix_sock_create connect() failure with errno = 111, then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock connection error, each followed by "qpair failed and we were unable to recover it.") repeats continuously between 10:54:58.862 and 10:54:58.876, always targeting addr=10.0.0.2, port=4420, cycling over tqpair handles 0x7fb6e8000b90, 0x7fb6f0000b90, 0x7fb6e0000b90, and 0x1f80990.]
00:34:10.693 [2024-07-23 10:54:58.876370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.693 [2024-07-23 10:54:58.876396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.693 qpair failed and we were unable to recover it.
00:34:10.693 [2024-07-23 10:54:58.876495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.876524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.876620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.876651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.876741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.876767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.876856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.876882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.876975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.877002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 
00:34:10.693 [2024-07-23 10:54:58.877111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.877141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.877246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.877272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.877362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.877388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.877470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.877509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.877600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.877627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 
00:34:10.693 [2024-07-23 10:54:58.877718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.877745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.877831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.877858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.877946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.877975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.878074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.878108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.878198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.878226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 
00:34:10.693 [2024-07-23 10:54:58.878321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.878350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.878446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.878490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.878607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.878644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.878741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.878769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.878851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.878877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 
00:34:10.693 [2024-07-23 10:54:58.878965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.878992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.879087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.879114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.879208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.879237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.879340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.879369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.879462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.879496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 
00:34:10.693 [2024-07-23 10:54:58.879583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.879609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.879697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.879723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.879816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.879841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.879926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.879957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.880045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.880073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 
00:34:10.693 [2024-07-23 10:54:58.880172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.880204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.880300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.880328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.880421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.880448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.880550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.880577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 00:34:10.693 [2024-07-23 10:54:58.880663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.693 [2024-07-23 10:54:58.880689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.693 qpair failed and we were unable to recover it. 
00:34:10.694 [2024-07-23 10:54:58.880780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.880808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.880899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.880925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.881010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.881036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.881117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.881143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.881229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.881258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 
00:34:10.694 [2024-07-23 10:54:58.881353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.881378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.881468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.881509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.881607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.881634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.881723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.881750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.881846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.881872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 
00:34:10.694 [2024-07-23 10:54:58.881955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.881981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.882069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.882096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.882185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.882211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.882299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.882326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.882415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.882441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 
00:34:10.694 [2024-07-23 10:54:58.882537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.882564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.882658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.882683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.882768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.882794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.882883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.882909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.883003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.883031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 
00:34:10.694 [2024-07-23 10:54:58.883132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.883166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.883267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.883295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.883386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.883415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.883515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.883543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.883637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.883664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 
00:34:10.694 [2024-07-23 10:54:58.883757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.883784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.883876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.883903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.883995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.884031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.884117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.884143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.884242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.884271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 
00:34:10.694 [2024-07-23 10:54:58.884366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.884392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.884488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.884519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.884608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.884634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.884724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.884756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.884846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.884873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 
00:34:10.694 [2024-07-23 10:54:58.884961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.884990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.885078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.885104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.694 [2024-07-23 10:54:58.885193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.694 [2024-07-23 10:54:58.885221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.694 qpair failed and we were unable to recover it. 00:34:10.695 [2024-07-23 10:54:58.885309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-07-23 10:54:58.885340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-07-23 10:54:58.885424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-07-23 10:54:58.885450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 
00:34:10.695 [2024-07-23 10:54:58.885543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-07-23 10:54:58.885569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-07-23 10:54:58.885654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-07-23 10:54:58.885683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-07-23 10:54:58.885767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-07-23 10:54:58.885794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-07-23 10:54:58.885884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-07-23 10:54:58.885913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 00:34:10.695 [2024-07-23 10:54:58.886007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.695 [2024-07-23 10:54:58.886040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.695 qpair failed and we were unable to recover it. 
00:34:10.697 [2024-07-23 10:54:58.899252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.899282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-07-23 10:54:58.899369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.899397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-07-23 10:54:58.899491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.899518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-07-23 10:54:58.899611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.899638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-07-23 10:54:58.899724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.899750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 
00:34:10.697 [2024-07-23 10:54:58.899845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.899874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-07-23 10:54:58.899966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.899991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-07-23 10:54:58.900080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.900105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-07-23 10:54:58.900191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.900217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-07-23 10:54:58.900307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.900334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 
00:34:10.697 [2024-07-23 10:54:58.900425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.900454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-07-23 10:54:58.900554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.900581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-07-23 10:54:58.900673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.900699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-07-23 10:54:58.900788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.900815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.697 [2024-07-23 10:54:58.900903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.900930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 
00:34:10.697 [2024-07-23 10:54:58.901020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.697 [2024-07-23 10:54:58.901049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.697 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.901245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.901272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.901369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.901395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.901492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.901520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.901630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.901656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 
00:34:10.698 [2024-07-23 10:54:58.901742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.901767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.901854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.901880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.901962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.901987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.902077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.902102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.902192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.902221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 
00:34:10.698 [2024-07-23 10:54:58.902315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.902343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.902443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.902472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.902573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.902600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.902689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.902716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.902807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.902833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 
00:34:10.698 [2024-07-23 10:54:58.902928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.902957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.903051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.903079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.903169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.903196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.903286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.903314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.903406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.903433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 
00:34:10.698 [2024-07-23 10:54:58.903530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.903559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.903648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.903676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.903766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.903793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.903886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.903912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.904002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.904031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 
00:34:10.698 [2024-07-23 10:54:58.904124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.904151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.904239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.904264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.904350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.904375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.904456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.904488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.904574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.904600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 
00:34:10.698 [2024-07-23 10:54:58.904694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.904723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.904813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.904840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.904941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.904980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.905075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.905102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.905192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.905219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 
00:34:10.698 [2024-07-23 10:54:58.905311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.905337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.905435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.905462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.905557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.905585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.905679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.698 [2024-07-23 10:54:58.905707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.698 qpair failed and we were unable to recover it. 00:34:10.698 [2024-07-23 10:54:58.905823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.905850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 
00:34:10.699 [2024-07-23 10:54:58.905938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.905965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.906054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.906080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.906174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.906203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.906293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.906318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.906409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.906435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 
00:34:10.699 [2024-07-23 10:54:58.906528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.906554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.906640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.906666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.906748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.906773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.906862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.906888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.906985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.907014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 
00:34:10.699 [2024-07-23 10:54:58.907107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.907136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.907225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.907255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.907346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.907372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.907459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.907490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.907584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.907610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 
00:34:10.699 [2024-07-23 10:54:58.907705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.907731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.907814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.907839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.907929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.907957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.908047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.908075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 00:34:10.699 [2024-07-23 10:54:58.908190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.699 [2024-07-23 10:54:58.908219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.699 qpair failed and we were unable to recover it. 
00:34:10.699 [2024-07-23 10:54:58.908312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.699 [2024-07-23 10:54:58.908340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.699 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair failure above repeats ~79 more times between 10:54:58.908435 and 10:54:58.917883, with varying timestamps, cycling through tqpair handles 0x7fb6e0000b90, 0x7fb6e8000b90, 0x7fb6f0000b90, and 0x1f80990; every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it." ...]
00:34:10.701 [2024-07-23 10:54:58.917973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.918002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.918096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.918124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.918217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.918246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.918261] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:10.701 [2024-07-23 10:54:58.918296] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:10.701 [2024-07-23 10:54:58.918311] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:10.701 [2024-07-23 10:54:58.918324] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:10.701 [2024-07-23 10:54:58.918335] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:10.701 [2024-07-23 10:54:58.918338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.918369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.918393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:34:10.701 [2024-07-23 10:54:58.918418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:34:10.701 [2024-07-23 10:54:58.918463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.918494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.918594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.918619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.918711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.918735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.918826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.918852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80990 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.918943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.918973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.918963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:34:10.701 [2024-07-23 10:54:58.918971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:34:10.701 [2024-07-23 10:54:58.919075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.919101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.919191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.919219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.919319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.919346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.919440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.919467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
00:34:10.701 [2024-07-23 10:54:58.919564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.701 [2024-07-23 10:54:58.919590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.701 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair failure above repeats ~19 more times between 10:54:58.919697 and 10:54:58.921861, with varying timestamps, against tqpair 0x7fb6f0000b90 and then 0x7fb6e8000b90; every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it." ...]
00:34:10.702 [2024-07-23 10:54:58.921953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.921980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.922070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.922097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.922195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.922222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.922311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.922338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.922425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.922453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 
00:34:10.702 [2024-07-23 10:54:58.922563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.922592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.922693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.922720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.922804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.922831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.922924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.922950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.923052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.923081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 
00:34:10.702 [2024-07-23 10:54:58.923181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.923207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.923303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.923330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.923424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.923449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.923553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.923582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.923680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.923707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 
00:34:10.702 [2024-07-23 10:54:58.923794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.923821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.923925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.923962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.924106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.924136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.924226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.924264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.924372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.924398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 
00:34:10.702 [2024-07-23 10:54:58.924503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.924531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.924622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.924649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.924744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.924775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.924870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.924897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.924994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.925024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 
00:34:10.702 [2024-07-23 10:54:58.925115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.925142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.925237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.925264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.925354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.925380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.925470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.925508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.925614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.925642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 
00:34:10.702 [2024-07-23 10:54:58.925747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.925774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.925871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.925909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.926037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.926075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.926180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.926208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 00:34:10.702 [2024-07-23 10:54:58.926300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.702 [2024-07-23 10:54:58.926329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.702 qpair failed and we were unable to recover it. 
00:34:10.703 [2024-07-23 10:54:58.926423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.926460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.926575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.926603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.926697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.926724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.926822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.926856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.926949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.926977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 
00:34:10.703 [2024-07-23 10:54:58.927066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.927093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.927190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.927218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.927357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.927387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.927496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.927524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.927625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.927651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 
00:34:10.703 [2024-07-23 10:54:58.927744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.927770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.927867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.927893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.927988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.928015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.928106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.928132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.928224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.928256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 
00:34:10.703 [2024-07-23 10:54:58.928351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.928377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.928469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.928503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.928593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.928620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.928726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.928752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.928841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.928868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 
00:34:10.703 [2024-07-23 10:54:58.928949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.928975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.929061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.929087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.929189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.929215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.929301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.929326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.929427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.929453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 
00:34:10.703 [2024-07-23 10:54:58.929551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.929582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.929678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.929705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.929803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.929831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.929929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.929956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.930051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.930077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 
00:34:10.703 [2024-07-23 10:54:58.930165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.930192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.930287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.930313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.930409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.930443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.930562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.930591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.930692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.930731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 
00:34:10.703 [2024-07-23 10:54:58.930842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.930869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.930961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.930988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.931082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.931110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.931199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.931224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 00:34:10.703 [2024-07-23 10:54:58.931315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.703 [2024-07-23 10:54:58.931341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.703 qpair failed and we were unable to recover it. 
00:34:10.703-00:34:10.706 [2024-07-23 10:54:58.931430 through 10:54:58.945534] The following three-line error record repeats continuously (over one hundred occurrences), with tqpair alternating among 0x7fb6f0000b90, 0x7fb6e8000b90, and 0x7fb6e0000b90:
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:34:10.706 [2024-07-23 10:54:58.945628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.945655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 00:34:10.706 [2024-07-23 10:54:58.945753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.945780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 00:34:10.706 [2024-07-23 10:54:58.945872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.945898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 00:34:10.706 [2024-07-23 10:54:58.945996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.946024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 00:34:10.706 [2024-07-23 10:54:58.946116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.946143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 
00:34:10.706 [2024-07-23 10:54:58.946233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.946261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 00:34:10.706 [2024-07-23 10:54:58.946360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.946388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 00:34:10.706 [2024-07-23 10:54:58.946478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.946512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 00:34:10.706 [2024-07-23 10:54:58.946598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.946623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 00:34:10.706 [2024-07-23 10:54:58.946717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.946744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 
00:34:10.706 [2024-07-23 10:54:58.946835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.946861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 00:34:10.706 [2024-07-23 10:54:58.946950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.946977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 00:34:10.706 [2024-07-23 10:54:58.947072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.947098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 00:34:10.706 [2024-07-23 10:54:58.947184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.706 [2024-07-23 10:54:58.947210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.706 qpair failed and we were unable to recover it. 00:34:10.706 [2024-07-23 10:54:58.947308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.947334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 
00:34:10.707 [2024-07-23 10:54:58.947435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.947462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.947579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.947605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.947698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.947724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.947812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.947839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.947928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.947960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 
00:34:10.707 [2024-07-23 10:54:58.948047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.948073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.948162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.948188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.948288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.948314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.948408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.948433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.948526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.948553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 
00:34:10.707 [2024-07-23 10:54:58.948650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.948676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.948776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.948806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.948906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.948933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.949027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.949054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.949148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.949177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 
00:34:10.707 [2024-07-23 10:54:58.949272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.949300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.949396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.949424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.949519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.949546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.949647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.949676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.949770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.949797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 
00:34:10.707 [2024-07-23 10:54:58.949889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.949916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.950009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.950037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.950140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.950168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.950255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.950282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.950371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.950398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 
00:34:10.707 [2024-07-23 10:54:58.950501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.950536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.950631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.950659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.950754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.950781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.950877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.950904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.951005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.951034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 
00:34:10.707 [2024-07-23 10:54:58.951126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.951154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.951253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.951280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.951374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.951402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.951497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.951524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.951620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.951648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 
00:34:10.707 [2024-07-23 10:54:58.951756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.951782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.951878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.951904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.951998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.952025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.952123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.952151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.952243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.952270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 
00:34:10.707 [2024-07-23 10:54:58.952365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.952392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.952484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.952513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.707 qpair failed and we were unable to recover it. 00:34:10.707 [2024-07-23 10:54:58.952613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.707 [2024-07-23 10:54:58.952640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.952735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.952761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.952846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.952878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 
00:34:10.708 [2024-07-23 10:54:58.952992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.953030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.953138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.953174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.953287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.953316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.953408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.953435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.953528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.953555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 
00:34:10.708 [2024-07-23 10:54:58.953654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.953681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.953772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.953799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.953893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.953920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.954016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.954043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.954132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.954159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 
00:34:10.708 [2024-07-23 10:54:58.954251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.954278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.954364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.954390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.954478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.954513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.954614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.954642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.954737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.954764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 
00:34:10.708 [2024-07-23 10:54:58.954853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.954879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.954977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.955004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.955099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.955126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.955212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.955238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 00:34:10.708 [2024-07-23 10:54:58.955331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.708 [2024-07-23 10:54:58.955358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.708 qpair failed and we were unable to recover it. 
00:34:10.711 [2024-07-23 10:54:58.968838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.968865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.968949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.968977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.969074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.969101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.969206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.969234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.969323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.969350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 
00:34:10.711 [2024-07-23 10:54:58.969450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.969485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.969582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.969609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.969699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.969726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.969820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.969846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.969940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.969967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 
00:34:10.711 [2024-07-23 10:54:58.970056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.970082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.970174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.970202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.970306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.970333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.970421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.970447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.970538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.970565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 
00:34:10.711 [2024-07-23 10:54:58.970659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.970686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.970785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.970822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.970921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.970950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.971049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.971077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.971159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.971185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 
00:34:10.711 [2024-07-23 10:54:58.971277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.971306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.971399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.971426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.971521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.971549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.971648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.971674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.971769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.971796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 
00:34:10.711 [2024-07-23 10:54:58.971895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.971923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.972018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.972045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.972131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.972158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.972251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.972279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.972370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.972401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 
00:34:10.711 [2024-07-23 10:54:58.972534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.972562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.972658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.972684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.972780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.972807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.972902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.972930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.973024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.973051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 
00:34:10.711 [2024-07-23 10:54:58.973132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.973159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.973254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.711 [2024-07-23 10:54:58.973281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.711 qpair failed and we were unable to recover it. 00:34:10.711 [2024-07-23 10:54:58.973372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.973399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.973502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.973529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.973624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.973651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 
00:34:10.712 [2024-07-23 10:54:58.973744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.973771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.973864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.973891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.973976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.974003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.974104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.974130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.974214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.974240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 
00:34:10.712 [2024-07-23 10:54:58.974331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.974357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.974453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.974490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.974601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.974629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.974722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.974748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.974839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.974866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 
00:34:10.712 [2024-07-23 10:54:58.974955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.974982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.975074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.975101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.975188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.975215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.975306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.975332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.975430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.975457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 
00:34:10.712 [2024-07-23 10:54:58.975581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.975608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.975712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.975746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.975846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.975876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.975972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.976000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.976096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.976124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 
00:34:10.712 [2024-07-23 10:54:58.976217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.976245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.976335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.976362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.976450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.976497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.976615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.976653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.976762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.976791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 
00:34:10.712 [2024-07-23 10:54:58.976892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.976918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.977013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.977040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.977129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.977156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.977246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.977272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.977367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.977398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 
00:34:10.712 [2024-07-23 10:54:58.977496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.977522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.977615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.977644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.977743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.977770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.977869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.977896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.977990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.978017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 
00:34:10.712 [2024-07-23 10:54:58.978112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.978140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.978229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.978256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.978343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.978369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.978462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.978500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.978599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.978626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 
00:34:10.712 [2024-07-23 10:54:58.978724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.978751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.978842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.978868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.712 qpair failed and we were unable to recover it. 00:34:10.712 [2024-07-23 10:54:58.978961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.712 [2024-07-23 10:54:58.978988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.979086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.979113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.979201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.979228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 
00:34:10.713 [2024-07-23 10:54:58.979319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.979347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.979441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.979467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.979571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.979597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.979689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.979715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.979800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.979826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 
00:34:10.713 [2024-07-23 10:54:58.979924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.979951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.980052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.980078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.980168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.980195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.980287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.980314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.980410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.980437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 
00:34:10.713 [2024-07-23 10:54:58.980540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.980570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.980669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.980695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.980788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.980815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.980905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.980930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.981025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.981052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 
00:34:10.713 [2024-07-23 10:54:58.981144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.981172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.981278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.981305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.981403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.981430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.981524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.981551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.981645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.981672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 
00:34:10.713 [2024-07-23 10:54:58.981763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.981790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.981883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.981909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.981998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.982024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.982117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.982144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.982230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.982260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 
00:34:10.713 [2024-07-23 10:54:58.982347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.982373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.982467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.982506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.982625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.982660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.982766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.982795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.982899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.982925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 
00:34:10.713 [2024-07-23 10:54:58.983017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.983043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.983132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.983159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.983243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.983269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.983360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.983386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.983501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.983529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 
00:34:10.713 [2024-07-23 10:54:58.983621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.983648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.983748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.983775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.983863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.983890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.713 qpair failed and we were unable to recover it. 00:34:10.713 [2024-07-23 10:54:58.983984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.713 [2024-07-23 10:54:58.984010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.984100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.984127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 
00:34:10.714 [2024-07-23 10:54:58.984221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.984248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.984342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.984370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.984465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.984502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.984605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.984632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.984720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.984747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 
00:34:10.714 [2024-07-23 10:54:58.984833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.984861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.984950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.984976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.985075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.985104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.985193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.985220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.985323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.985356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 
00:34:10.714 [2024-07-23 10:54:58.985460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.985493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.985590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.985618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.985711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.985738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.985837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.985865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.985956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.985984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 
00:34:10.714 [2024-07-23 10:54:58.986076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.986104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.986208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.986235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.986325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.986352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.986461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.986512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.986649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.986688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 
00:34:10.714 [2024-07-23 10:54:58.986799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.986827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.986919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.986946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.987041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.987067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.987164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.987190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.987286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.987321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 
00:34:10.714 [2024-07-23 10:54:58.987421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.987449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.987547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.987574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.987662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.987688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.987775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.987802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 00:34:10.714 [2024-07-23 10:54:58.987888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.714 [2024-07-23 10:54:58.987914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.714 qpair failed and we were unable to recover it. 
00:34:10.715 [2024-07-23 10:54:58.988008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.988034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 00:34:10.715 [2024-07-23 10:54:58.988123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.988149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 00:34:10.715 [2024-07-23 10:54:58.988239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.988266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 00:34:10.715 [2024-07-23 10:54:58.988359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.988385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 00:34:10.715 [2024-07-23 10:54:58.988485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.988513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 
00:34:10.715 [2024-07-23 10:54:58.988603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.988629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 00:34:10.715 [2024-07-23 10:54:58.988721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.988748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 00:34:10.715 [2024-07-23 10:54:58.988844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.988875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 00:34:10.715 [2024-07-23 10:54:58.988965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.988991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 00:34:10.715 [2024-07-23 10:54:58.989080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.989108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 
00:34:10.715 [2024-07-23 10:54:58.989205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.989232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 00:34:10.715 [2024-07-23 10:54:58.989320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.989346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 00:34:10.715 [2024-07-23 10:54:58.989438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.989464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 00:34:10.715 [2024-07-23 10:54:58.989578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.989605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 00:34:10.715 [2024-07-23 10:54:58.989694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.715 [2024-07-23 10:54:58.989721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.715 qpair failed and we were unable to recover it. 
00:34:10.715 [2024-07-23 10:54:58.989812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.715 [2024-07-23 10:54:58.989838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.715 qpair failed and we were unable to recover it.
00:34:10.715 [... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequences repeat through 10:54:59.003679 for tqpair=0x7fb6e8000b90, 0x7fb6f0000b90, and 0x7fb6e0000b90, all with addr=10.0.0.2, port=4420 ...]
00:34:10.718 [2024-07-23 10:54:59.003772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.718 [2024-07-23 10:54:59.003801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.718 qpair failed and we were unable to recover it. 00:34:10.718 [2024-07-23 10:54:59.003905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.718 [2024-07-23 10:54:59.003934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.718 qpair failed and we were unable to recover it. 00:34:10.718 [2024-07-23 10:54:59.004029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.718 [2024-07-23 10:54:59.004057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.718 qpair failed and we were unable to recover it. 00:34:10.718 [2024-07-23 10:54:59.004152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.718 [2024-07-23 10:54:59.004178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.718 qpair failed and we were unable to recover it. 00:34:10.718 [2024-07-23 10:54:59.004269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.718 [2024-07-23 10:54:59.004296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.718 qpair failed and we were unable to recover it. 
00:34:10.718 [2024-07-23 10:54:59.004384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.718 [2024-07-23 10:54:59.004411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.718 qpair failed and we were unable to recover it. 00:34:10.718 [2024-07-23 10:54:59.004507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.718 [2024-07-23 10:54:59.004535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.718 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.004625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.004653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.004747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.004775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.004863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.004889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 
00:34:10.719 [2024-07-23 10:54:59.004978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.005005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.005089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.005120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.005214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.005241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.005334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.005360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.005445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.005471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 
00:34:10.719 [2024-07-23 10:54:59.005580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.005607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.005695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.005721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.005828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.005855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.005953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.005981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.006076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.006103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 
00:34:10.719 [2024-07-23 10:54:59.006194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.006223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.006311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.006338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.006425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.006452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.006549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.006577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.006668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.006695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 
00:34:10.719 [2024-07-23 10:54:59.006801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.006844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.006956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.006985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.007083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.007109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.007200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.007226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.007319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.007344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 
00:34:10.719 [2024-07-23 10:54:59.007443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.007469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.007578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.007606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.007697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.007724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.007825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.007853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.007948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.007974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 
00:34:10.719 [2024-07-23 10:54:59.008061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.008088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.008188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.008215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.008306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.008334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.008426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.008453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.008561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.008588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 
00:34:10.719 [2024-07-23 10:54:59.008682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.719 [2024-07-23 10:54:59.008708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.719 qpair failed and we were unable to recover it. 00:34:10.719 [2024-07-23 10:54:59.008794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.008820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.008913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.008941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.009029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.009055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.009139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.009164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 
00:34:10.720 [2024-07-23 10:54:59.009260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.009286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.009380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.009405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.009504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.009531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.009629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.009655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.009744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.009772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 
00:34:10.720 [2024-07-23 10:54:59.009863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.009890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.009987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.010018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.010108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.010134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.010223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.010249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.010340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.010367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 
00:34:10.720 [2024-07-23 10:54:59.010461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.010495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.010591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.010616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.010705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.010731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.010819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.010844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.010929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.010955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 
00:34:10.720 [2024-07-23 10:54:59.011042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.011069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.011175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.011201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.011291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.011318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.011419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.011445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.011551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.011579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 
00:34:10.720 [2024-07-23 10:54:59.011675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.011701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.011801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.011826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.011921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.011950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.012045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.012071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.012163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.012190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 
00:34:10.720 [2024-07-23 10:54:59.012283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.012309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.012401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.012427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.012530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.012558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.012643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.012670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.012755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.012781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 
00:34:10.720 [2024-07-23 10:54:59.012866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.012893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.012986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.720 [2024-07-23 10:54:59.013011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.720 qpair failed and we were unable to recover it. 00:34:10.720 [2024-07-23 10:54:59.013103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.721 [2024-07-23 10:54:59.013129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.721 qpair failed and we were unable to recover it. 00:34:10.721 [2024-07-23 10:54:59.013223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.721 [2024-07-23 10:54:59.013250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.721 qpair failed and we were unable to recover it. 00:34:10.721 [2024-07-23 10:54:59.013346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.721 [2024-07-23 10:54:59.013375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.721 qpair failed and we were unable to recover it. 
00:34:10.724 [2024-07-23 10:54:59.026659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.026685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.026775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.026801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.026895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.026921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.027007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.027033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.027122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.027149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 
00:34:10.724 [2024-07-23 10:54:59.027239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.027266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.027359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.027386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.027471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.027510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.027616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.027644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.027735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.027761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 
00:34:10.724 [2024-07-23 10:54:59.027840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.027865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.027947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.027973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.028065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.028091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.028184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.028212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.028303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.028330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 
00:34:10.724 [2024-07-23 10:54:59.028416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.028441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.028542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.028570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.028667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.028693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.028779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.028805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.028899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.028925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 
00:34:10.724 [2024-07-23 10:54:59.029026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.029053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.029144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.029180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.029271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.029297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.029393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.029420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.029519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.029546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 
00:34:10.724 [2024-07-23 10:54:59.029637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.029662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.029756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.029782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.029876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.029903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.029987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.030013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.724 qpair failed and we were unable to recover it. 00:34:10.724 [2024-07-23 10:54:59.030109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.724 [2024-07-23 10:54:59.030136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 
00:34:10.725 [2024-07-23 10:54:59.030227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.030255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.030347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.030374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.030474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.030507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.030596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.030622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.030711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.030737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 
00:34:10.725 [2024-07-23 10:54:59.030834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.030861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.030953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.030979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.031064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.031090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.031177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.031203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.031294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.031320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 
00:34:10.725 [2024-07-23 10:54:59.031404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.031430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.031528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.031555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.031651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.031678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.031764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.031791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.031893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.031919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 
00:34:10.725 [2024-07-23 10:54:59.032004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.032030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.032122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.032148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.032242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.032270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.032366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.032395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.032490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.032516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 
00:34:10.725 [2024-07-23 10:54:59.032615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.032641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.032734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.032760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.032855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.032881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.032975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.033003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.033090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.033117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 
00:34:10.725 [2024-07-23 10:54:59.033207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.033235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.033332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.033360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.033454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.033486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.033575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.033601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.033692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.033718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 
00:34:10.725 [2024-07-23 10:54:59.033804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.033831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.033917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.033948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.034139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.034166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.034249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.034275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 00:34:10.725 [2024-07-23 10:54:59.034366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.725 [2024-07-23 10:54:59.034392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.725 qpair failed and we were unable to recover it. 
00:34:10.725 [2024-07-23 10:54:59.034494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.034522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.034611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.034638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.034726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.034752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.034847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.034875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.034969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.034996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 
00:34:10.726 [2024-07-23 10:54:59.035381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.035418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 
00:34:10.726 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:10.726 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:10.726 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 
00:34:10.726 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:34:10.726 [2024-07-23 10:54:59.036866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.036892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.036980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.037006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.037101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.037128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.037217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.037243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.037336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.037364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 
00:34:10.726 [2024-07-23 10:54:59.037456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.037491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.037590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.037616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.037708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.037733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.037831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.037857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.037950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.037977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 
00:34:10.726 [2024-07-23 10:54:59.038064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.038090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.038178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.038204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.038292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.038317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.038402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.038429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.038534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.038561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 
00:34:10.726 [2024-07-23 10:54:59.038653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.038680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.038778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.726 [2024-07-23 10:54:59.038805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.726 qpair failed and we were unable to recover it. 00:34:10.726 [2024-07-23 10:54:59.038906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.038934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.039028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.039055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.039143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.039171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 
00:34:10.727 [2024-07-23 10:54:59.039267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.039294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.039380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.039406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.039503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.039531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.039637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.039665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.039761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.039787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 
00:34:10.727 [2024-07-23 10:54:59.039879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.039906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.039995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.040022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.040118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.040146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.040239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.040265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.040362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.040389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 
00:34:10.727 [2024-07-23 10:54:59.040486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.040526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.040624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.040653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.040750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.040778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.040869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.040897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.040992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.041020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 
00:34:10.727 [2024-07-23 10:54:59.041112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.041138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.041232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.041259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.041353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.041379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.041475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.041509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.041603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.041629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 
00:34:10.727 [2024-07-23 10:54:59.041722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.041748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.041845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.041871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.041964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.041997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.042091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.042119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.042210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.042237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 
00:34:10.727 [2024-07-23 10:54:59.042326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.042353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.042440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.042466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.042575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.042604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.042685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.042711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.042803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.042829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 
00:34:10.727 [2024-07-23 10:54:59.042917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.042943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.043024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.043050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.043135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.043161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.727 [2024-07-23 10:54:59.043257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.727 [2024-07-23 10:54:59.043283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.727 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.043376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.043403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 
00:34:10.728 [2024-07-23 10:54:59.043504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.043533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.043636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.043663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.043759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.043788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.043878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.043904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.043986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.044013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 
00:34:10.728 [2024-07-23 10:54:59.044101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.044128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.044219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.044247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.044345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.044373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.044466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.044510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.044604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.044630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 
00:34:10.728 [2024-07-23 10:54:59.044725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.044752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.044843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.044873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.044972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.045000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.045094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.045123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.045220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.045247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 
00:34:10.728 [2024-07-23 10:54:59.045344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.045372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.045463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.045500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.045597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.045624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.045718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.045744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.045846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.045873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 
00:34:10.728 [2024-07-23 10:54:59.045961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.045990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.046079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.046106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.046201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.046228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.046321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.046349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.046435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.046462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 
00:34:10.728 [2024-07-23 10:54:59.046564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.046591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.046669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.046698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.046791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.046822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.046913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.728 [2024-07-23 10:54:59.046940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.728 qpair failed and we were unable to recover it. 00:34:10.728 [2024-07-23 10:54:59.047031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.729 [2024-07-23 10:54:59.047060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.729 qpair failed and we were unable to recover it. 
00:34:10.729 [2024-07-23 10:54:59.047155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.729 [2024-07-23 10:54:59.047182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.729 qpair failed and we were unable to recover it.
00:34:10.732 (previous error sequence repeated ~114 times between [2024-07-23 10:54:59.047269] and [2024-07-23 10:54:59.061046], varying only in timestamps and tqpair pointer: 0x7fb6e8000b90, 0x7fb6f0000b90, 0x7fb6e0000b90; all against addr=10.0.0.2, port=4420)
00:34:10.732 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:10.732 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:10.732 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.732 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:10.734 [2024-07-23 10:54:59.072637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.734 [2024-07-23 10:54:59.072663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.734 qpair failed and we were unable to recover it. 00:34:10.734 [2024-07-23 10:54:59.072754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.734 [2024-07-23 10:54:59.072781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.734 qpair failed and we were unable to recover it. 00:34:10.734 [2024-07-23 10:54:59.072872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.072900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.072996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.073025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.073136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.073171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 
00:34:10.735 [2024-07-23 10:54:59.073267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.073295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.073392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.073432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.073545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.073573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.073683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.073709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.073806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.073835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 
00:34:10.735 [2024-07-23 10:54:59.073933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.073959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.074059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.074086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.074182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.074209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.074301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.074332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.074426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.074453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 
00:34:10.735 [2024-07-23 10:54:59.074565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.074592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.074686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.074712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.074801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.074828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.074910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.074936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.075029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.075057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 
00:34:10.735 [2024-07-23 10:54:59.075147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.075174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.075262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.075288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.075381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.075408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.075503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.075537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.075643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.075670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 
00:34:10.735 [2024-07-23 10:54:59.075766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.075792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.075892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.075919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.076020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.076047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.076135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.076161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.076256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.076282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 
00:34:10.735 [2024-07-23 10:54:59.076362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.076389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.076484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.076511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.076625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.076652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.076755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.076782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.076879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.076907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 
00:34:10.735 [2024-07-23 10:54:59.077001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.077028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.077124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.077154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.077254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.077280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.077380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.077407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 00:34:10.735 [2024-07-23 10:54:59.077504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.735 [2024-07-23 10:54:59.077531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.735 qpair failed and we were unable to recover it. 
00:34:10.736 [2024-07-23 10:54:59.077642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.077669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.077773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.077800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.077887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.077914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.078017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.078052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.078152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.078180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 
00:34:10.736 [2024-07-23 10:54:59.078288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.078314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.078406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.078433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.078537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.078567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.078666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.078696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.078790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.078816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 
00:34:10.736 [2024-07-23 10:54:59.078910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.078937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.079029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.079057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.079151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.079178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.079272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.079303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.079397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.079426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 
00:34:10.736 [2024-07-23 10:54:59.079527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.079561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.079665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.079692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.079785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.079811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.079902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.079928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.080016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.080044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 
00:34:10.736 [2024-07-23 10:54:59.080130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.080156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.080250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.080276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.080370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.080397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.080486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.080513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.080622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.080648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 
00:34:10.736 [2024-07-23 10:54:59.080737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.080764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.080851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.080877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.080973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.080999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.081098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.081125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.081210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.081237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 
00:34:10.736 [2024-07-23 10:54:59.081320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.081346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.081434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.081460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.081569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.081597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.081686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.081712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 00:34:10.736 [2024-07-23 10:54:59.081804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.736 [2024-07-23 10:54:59.081831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.736 qpair failed and we were unable to recover it. 
00:34:10.736 [2024-07-23 10:54:59.081931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.736 [2024-07-23 10:54:59.081960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.736 qpair failed and we were unable to recover it.
[The connect()-failed / sock-connection-error / qpair-failed triplet above repeats for every retry from 10:54:59.082 through 10:54:59.096, with tqpair cycling among 0x7fb6e0000b90, 0x7fb6e8000b90 and 0x7fb6f0000b90; the repeats are elided. Test-harness output interleaved with these records:]
00:34:10.738 Malloc0
00:34:10.738 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:10.738 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:10.738 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:10.738 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:10.739 [2024-07-23 10:54:59.092054] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:10.740 [2024-07-23 10:54:59.096206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.096233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.096322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.096350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.096443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.096469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.096581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.096608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.096695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.096721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 
00:34:10.740 [2024-07-23 10:54:59.096811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.096837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.096928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.096955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.097071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.097099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.097194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.097222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.097324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.097352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 
00:34:10.740 [2024-07-23 10:54:59.097441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.097469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.097580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.097615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.097710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.097737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.097823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.097849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.097945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.097972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 
00:34:10.740 [2024-07-23 10:54:59.098104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.098130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.098220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.098250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.098343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.098370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.098458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.098494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.098592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.098619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 
00:34:10.740 [2024-07-23 10:54:59.098754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.098781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.098870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.098895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.098983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.099015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.099109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.099135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.740 [2024-07-23 10:54:59.099227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.099256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 
00:34:10.740 [2024-07-23 10:54:59.099346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.740 [2024-07-23 10:54:59.099373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.740 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.099464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.099496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.099580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.099606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.099692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.099718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.099817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.099845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 
00:34:10.741 [2024-07-23 10:54:59.099931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.099957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.100055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.100084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.100185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.100212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.741 [2024-07-23 10:54:59.100299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.100326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 
00:34:10.741 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:10.741 [2024-07-23 10:54:59.100414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.100441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.100553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.741 [2024-07-23 10:54:59.100582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.100670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.741 [2024-07-23 10:54:59.100697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.100791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.100819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 
00:34:10.741 [2024-07-23 10:54:59.100908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.100936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.101028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.101054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.101147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.101175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.101263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.101289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.101379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.101407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 
00:34:10.741 [2024-07-23 10:54:59.101504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.101533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.101666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.101693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.101774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.101800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.101886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.101913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.102016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.102047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 
00:34:10.741 [2024-07-23 10:54:59.102134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.102160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.102250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.102278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.102374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.102401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.102495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.102522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.102621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.102648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 
00:34:10.741 [2024-07-23 10:54:59.102736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.102768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.102850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.102877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.102971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.102998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.103087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.103114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.103245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.103271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 
00:34:10.741 [2024-07-23 10:54:59.103359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.103385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.103489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.103518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.103613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.741 [2024-07-23 10:54:59.103641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.741 qpair failed and we were unable to recover it. 00:34:10.741 [2024-07-23 10:54:59.103747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.103774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.103863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.103889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 
00:34:10.742 [2024-07-23 10:54:59.103972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.103998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.104089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.104115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.104214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.104241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.104333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.104359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.104448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.104473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 
00:34:10.742 [2024-07-23 10:54:59.104576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.104603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.104695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.104721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.104810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.104836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.104929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.104957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.105045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.105072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 
00:34:10.742 [2024-07-23 10:54:59.105170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.105211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.105317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.105346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.105439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.105466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.105593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.105621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.105719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.105747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 
00:34:10.742 [2024-07-23 10:54:59.105839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.105865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.105950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.105976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.106069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.106096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.106191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.106227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.106325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.106354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 
00:34:10.742 [2024-07-23 10:54:59.106447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.106475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.106577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.106605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.106694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.106721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.106814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.106841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.106936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.106967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 
00:34:10.742 [2024-07-23 10:54:59.107064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.107093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.107189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.107217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.107310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.107337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.107431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.742 [2024-07-23 10:54:59.107460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.742 qpair failed and we were unable to recover it. 00:34:10.742 [2024-07-23 10:54:59.107570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.107598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 
00:34:10.743 [2024-07-23 10:54:59.107687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.107714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.107804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.107831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.107923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.107951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.108045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.108071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.108162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.108188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 
00:34:10.743 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.743 [2024-07-23 10:54:59.108276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.108303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.108396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.108422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.108518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.108552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.743 [2024-07-23 10:54:59.108649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.108677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 
00:34:10.743 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.743 [2024-07-23 10:54:59.108771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.108797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.108895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.108934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.109062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.109100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.109219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.109258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.109364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.109393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 
00:34:10.743 [2024-07-23 10:54:59.109489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.109517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.109612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.109638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.109732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.109759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.109854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.109881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.109971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.109998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 
00:34:10.743 [2024-07-23 10:54:59.110087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.110118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.110216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.110243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.110333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.110359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.110452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.110502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.110606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.110634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 
00:34:10.743 [2024-07-23 10:54:59.110729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.110755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.110843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.110869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.110958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.110984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.111075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.111101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.111194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.111221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 
00:34:10.743 [2024-07-23 10:54:59.111306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.111333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.111426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.111459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.111565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.111592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.111689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.111716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 00:34:10.743 [2024-07-23 10:54:59.111815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.111842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.743 qpair failed and we were unable to recover it. 
00:34:10.743 [2024-07-23 10:54:59.111928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.743 [2024-07-23 10:54:59.111955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.112073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.112102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.112197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.112223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.112315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.112343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.112441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.112469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 
00:34:10.744 [2024-07-23 10:54:59.112568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.112597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.112689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.112715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.112808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.112835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.112931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.112958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.113052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.113078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 
00:34:10.744 [2024-07-23 10:54:59.113170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.113196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.113290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.113319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.113417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.113446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.113555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.113583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.113673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.113700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 
00:34:10.744 [2024-07-23 10:54:59.113788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.113815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.113916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.113943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.114037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.114063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.114153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.114180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.114277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.114304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 
00:34:10.744 [2024-07-23 10:54:59.114511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.114539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.114630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.114657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.114756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.114782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.114870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.114896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.114984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.115010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 
00:34:10.744 [2024-07-23 10:54:59.115100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.115132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.115229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.115259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.115348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.115374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.115471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.115508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.115600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.115628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 
00:34:10.744 [2024-07-23 10:54:59.115722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.115749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.115833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.115860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.115953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.115990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.116108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.116150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.116275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.744 [2024-07-23 10:54:59.116314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 qpair failed and we were unable to recover it. 
00:34:10.744 [2024-07-23 10:54:59.116434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.116474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.744 [2024-07-23 10:54:59.116598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.744 [2024-07-23 10:54:59.116627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.744 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.744 qpair failed and we were unable to recover it. 00:34:10.745 [2024-07-23 10:54:59.116722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.745 [2024-07-23 10:54:59.116754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.745 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.745 qpair failed and we were unable to recover it. 00:34:10.745 [2024-07-23 10:54:59.116833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.745 [2024-07-23 10:54:59.116859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.745 qpair failed and we were unable to recover it. 
00:34:10.745 [2024-07-23 10:54:59.116979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.745 [2024-07-23 10:54:59.117017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.745 qpair failed and we were unable to recover it. 00:34:10.745 [2024-07-23 10:54:59.117141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.745 [2024-07-23 10:54:59.117181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420 00:34:10.745 qpair failed and we were unable to recover it. 00:34:10.745 [2024-07-23 10:54:59.117292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.745 [2024-07-23 10:54:59.117325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.745 qpair failed and we were unable to recover it. 00:34:10.745 [2024-07-23 10:54:59.117426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.745 [2024-07-23 10:54:59.117454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.745 qpair failed and we were unable to recover it. 00:34:10.745 [2024-07-23 10:54:59.117569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.745 [2024-07-23 10:54:59.117597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.745 qpair failed and we were unable to recover it. 
00:34:10.745 [2024-07-23 10:54:59.117695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.745 [2024-07-23 10:54:59.117723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.745 qpair failed and we were unable to recover it. 00:34:10.745 [2024-07-23 10:54:59.117817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.745 [2024-07-23 10:54:59.117845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.745 qpair failed and we were unable to recover it. 00:34:10.745 [2024-07-23 10:54:59.117942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.745 [2024-07-23 10:54:59.117970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.745 qpair failed and we were unable to recover it. 00:34:10.745 [2024-07-23 10:54:59.118065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.745 [2024-07-23 10:54:59.118091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.745 qpair failed and we were unable to recover it. 00:34:10.745 [2024-07-23 10:54:59.118176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.745 [2024-07-23 10:54:59.118201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420 00:34:10.745 qpair failed and we were unable to recover it. 
00:34:10.745 [2024-07-23 10:54:59.118296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.118323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.118412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.118443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.118544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.118571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.118672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.118700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.118789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.118816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.118910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.118936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.119031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.119058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.119264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.119298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6f0000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.119405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.119447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.119585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.119625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e0000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.119735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.119764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.119853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.119880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.119967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.119993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.120085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.745 [2024-07-23 10:54:59.120111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb6e8000b90 with addr=10.0.0.2, port=4420
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 [2024-07-23 10:54:59.120460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:10.745 [2024-07-23 10:54:59.122797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:10.745 [2024-07-23 10:54:59.122923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:10.745 [2024-07-23 10:54:59.122952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:10.745 [2024-07-23 10:54:59.122968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:10.745 [2024-07-23 10:54:59.122982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:10.745 [2024-07-23 10:54:59.123018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:10.745 qpair failed and we were unable to recover it.
00:34:10.745 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:10.745 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:10.745 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:10.745 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:10.745 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:10.745 [2024-07-23 10:54:59.132677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:10.745 10:54:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3951266
00:34:10.745 [2024-07-23 10:54:59.132790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:10.745 [2024-07-23 10:54:59.132824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:10.745 [2024-07-23 10:54:59.132842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:10.745 [2024-07-23 10:54:59.132856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:10.745 [2024-07-23 10:54:59.132891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:10.745 qpair failed and we were unable to recover it.
00:34:11.006 [2024-07-23 10:54:59.142739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.006 [2024-07-23 10:54:59.142874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.006 [2024-07-23 10:54:59.142902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.006 [2024-07-23 10:54:59.142918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.006 [2024-07-23 10:54:59.142935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.006 [2024-07-23 10:54:59.142980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.006 qpair failed and we were unable to recover it.
00:34:11.007 [2024-07-23 10:54:59.152619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.007 [2024-07-23 10:54:59.152754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.007 [2024-07-23 10:54:59.152781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.007 [2024-07-23 10:54:59.152797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.007 [2024-07-23 10:54:59.152817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.007 [2024-07-23 10:54:59.152849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.007 qpair failed and we were unable to recover it.
00:34:11.007 [2024-07-23 10:54:59.162686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.007 [2024-07-23 10:54:59.162785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.007 [2024-07-23 10:54:59.162812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.007 [2024-07-23 10:54:59.162828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.007 [2024-07-23 10:54:59.162842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.007 [2024-07-23 10:54:59.162873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.007 qpair failed and we were unable to recover it.
00:34:11.007 [2024-07-23 10:54:59.172788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.007 [2024-07-23 10:54:59.172915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.007 [2024-07-23 10:54:59.172943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.007 [2024-07-23 10:54:59.172958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.007 [2024-07-23 10:54:59.172972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.007 [2024-07-23 10:54:59.173003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.007 qpair failed and we were unable to recover it.
00:34:11.007 [2024-07-23 10:54:59.182693] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.007 [2024-07-23 10:54:59.182786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.007 [2024-07-23 10:54:59.182813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.007 [2024-07-23 10:54:59.182829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.007 [2024-07-23 10:54:59.182842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.007 [2024-07-23 10:54:59.182873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.007 qpair failed and we were unable to recover it.
00:34:11.007 [2024-07-23 10:54:59.192746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.007 [2024-07-23 10:54:59.192842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.007 [2024-07-23 10:54:59.192870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.007 [2024-07-23 10:54:59.192886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.007 [2024-07-23 10:54:59.192900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.007 [2024-07-23 10:54:59.192931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.007 qpair failed and we were unable to recover it.
00:34:11.007 [2024-07-23 10:54:59.202863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.007 [2024-07-23 10:54:59.202974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.007 [2024-07-23 10:54:59.203004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.007 [2024-07-23 10:54:59.203020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.007 [2024-07-23 10:54:59.203034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.007 [2024-07-23 10:54:59.203065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.007 qpair failed and we were unable to recover it.
00:34:11.007 [2024-07-23 10:54:59.212759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.007 [2024-07-23 10:54:59.212859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.007 [2024-07-23 10:54:59.212890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.007 [2024-07-23 10:54:59.212907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.007 [2024-07-23 10:54:59.212921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.007 [2024-07-23 10:54:59.212955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.007 qpair failed and we were unable to recover it.
00:34:11.007 [2024-07-23 10:54:59.222833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.007 [2024-07-23 10:54:59.222923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.007 [2024-07-23 10:54:59.222951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.007 [2024-07-23 10:54:59.222967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.007 [2024-07-23 10:54:59.222980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.007 [2024-07-23 10:54:59.223011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.007 qpair failed and we were unable to recover it.
00:34:11.007 [2024-07-23 10:54:59.232859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.007 [2024-07-23 10:54:59.232957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.007 [2024-07-23 10:54:59.232983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.007 [2024-07-23 10:54:59.232998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.007 [2024-07-23 10:54:59.233012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.007 [2024-07-23 10:54:59.233043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.007 qpair failed and we were unable to recover it.
00:34:11.007 [2024-07-23 10:54:59.242901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.007 [2024-07-23 10:54:59.242990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.007 [2024-07-23 10:54:59.243016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.008 [2024-07-23 10:54:59.243037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.008 [2024-07-23 10:54:59.243052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.008 [2024-07-23 10:54:59.243083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.008 qpair failed and we were unable to recover it.
00:34:11.008 [2024-07-23 10:54:59.252930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.008 [2024-07-23 10:54:59.253019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.008 [2024-07-23 10:54:59.253045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.008 [2024-07-23 10:54:59.253061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.008 [2024-07-23 10:54:59.253075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.008 [2024-07-23 10:54:59.253106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.008 qpair failed and we were unable to recover it.
00:34:11.008 [2024-07-23 10:54:59.262907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.008 [2024-07-23 10:54:59.263001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.008 [2024-07-23 10:54:59.263027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.008 [2024-07-23 10:54:59.263043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.008 [2024-07-23 10:54:59.263057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.008 [2024-07-23 10:54:59.263088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.008 qpair failed and we were unable to recover it.
00:34:11.008 [2024-07-23 10:54:59.272938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.008 [2024-07-23 10:54:59.273045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.008 [2024-07-23 10:54:59.273072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.008 [2024-07-23 10:54:59.273088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.008 [2024-07-23 10:54:59.273102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.008 [2024-07-23 10:54:59.273134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.008 qpair failed and we were unable to recover it.
00:34:11.008 [2024-07-23 10:54:59.282967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.008 [2024-07-23 10:54:59.283069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.008 [2024-07-23 10:54:59.283100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.008 [2024-07-23 10:54:59.283116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.008 [2024-07-23 10:54:59.283130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.008 [2024-07-23 10:54:59.283163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.008 qpair failed and we were unable to recover it.
00:34:11.008 [2024-07-23 10:54:59.293026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.008 [2024-07-23 10:54:59.293141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.008 [2024-07-23 10:54:59.293168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.008 [2024-07-23 10:54:59.293184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.008 [2024-07-23 10:54:59.293198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.008 [2024-07-23 10:54:59.293229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.008 qpair failed and we were unable to recover it.
00:34:11.008 [2024-07-23 10:54:59.303025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.008 [2024-07-23 10:54:59.303140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.008 [2024-07-23 10:54:59.303167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.008 [2024-07-23 10:54:59.303183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.008 [2024-07-23 10:54:59.303196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.008 [2024-07-23 10:54:59.303228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.008 qpair failed and we were unable to recover it.
00:34:11.008 [2024-07-23 10:54:59.313037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.008 [2024-07-23 10:54:59.313141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.008 [2024-07-23 10:54:59.313168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.008 [2024-07-23 10:54:59.313184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.008 [2024-07-23 10:54:59.313197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.008 [2024-07-23 10:54:59.313228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.008 qpair failed and we were unable to recover it.
00:34:11.008 [2024-07-23 10:54:59.323087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.008 [2024-07-23 10:54:59.323180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.008 [2024-07-23 10:54:59.323211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.008 [2024-07-23 10:54:59.323227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.008 [2024-07-23 10:54:59.323241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.008 [2024-07-23 10:54:59.323275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.008 qpair failed and we were unable to recover it.
00:34:11.008 [2024-07-23 10:54:59.333121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.008 [2024-07-23 10:54:59.333216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.008 [2024-07-23 10:54:59.333243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.008 [2024-07-23 10:54:59.333266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.008 [2024-07-23 10:54:59.333280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.008 [2024-07-23 10:54:59.333312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.008 qpair failed and we were unable to recover it.
00:34:11.008 [2024-07-23 10:54:59.343153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.009 [2024-07-23 10:54:59.343248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.009 [2024-07-23 10:54:59.343280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.009 [2024-07-23 10:54:59.343295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.009 [2024-07-23 10:54:59.343309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.009 [2024-07-23 10:54:59.343339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.009 qpair failed and we were unable to recover it.
00:34:11.009 [2024-07-23 10:54:59.353150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.009 [2024-07-23 10:54:59.353247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.009 [2024-07-23 10:54:59.353274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.009 [2024-07-23 10:54:59.353290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.009 [2024-07-23 10:54:59.353303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.009 [2024-07-23 10:54:59.353335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.009 qpair failed and we were unable to recover it.
00:34:11.009 [2024-07-23 10:54:59.363220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.009 [2024-07-23 10:54:59.363319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.009 [2024-07-23 10:54:59.363346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.009 [2024-07-23 10:54:59.363362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.009 [2024-07-23 10:54:59.363376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.009 [2024-07-23 10:54:59.363407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.009 qpair failed and we were unable to recover it.
00:34:11.009 [2024-07-23 10:54:59.373276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:11.009 [2024-07-23 10:54:59.373385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:11.009 [2024-07-23 10:54:59.373411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:11.009 [2024-07-23 10:54:59.373427] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:11.009 [2024-07-23 10:54:59.373441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:11.009 [2024-07-23 10:54:59.373472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:11.009 qpair failed and we were unable to recover it.
00:34:11.009 [2024-07-23 10:54:59.383243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.009 [2024-07-23 10:54:59.383390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.009 [2024-07-23 10:54:59.383420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.009 [2024-07-23 10:54:59.383437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.009 [2024-07-23 10:54:59.383451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.009 [2024-07-23 10:54:59.383489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.009 qpair failed and we were unable to recover it. 
00:34:11.009 [2024-07-23 10:54:59.393344] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.009 [2024-07-23 10:54:59.393443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.009 [2024-07-23 10:54:59.393473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.009 [2024-07-23 10:54:59.393503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.009 [2024-07-23 10:54:59.393518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.009 [2024-07-23 10:54:59.393564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.009 qpair failed and we were unable to recover it. 
00:34:11.009 [2024-07-23 10:54:59.403396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.009 [2024-07-23 10:54:59.403532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.009 [2024-07-23 10:54:59.403560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.009 [2024-07-23 10:54:59.403576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.009 [2024-07-23 10:54:59.403590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.009 [2024-07-23 10:54:59.403634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.009 qpair failed and we were unable to recover it. 
00:34:11.009 [2024-07-23 10:54:59.413357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.009 [2024-07-23 10:54:59.413452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.009 [2024-07-23 10:54:59.413490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.009 [2024-07-23 10:54:59.413508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.009 [2024-07-23 10:54:59.413521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.009 [2024-07-23 10:54:59.413553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.009 qpair failed and we were unable to recover it. 
00:34:11.009 [2024-07-23 10:54:59.423497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.009 [2024-07-23 10:54:59.423587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.009 [2024-07-23 10:54:59.423622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.009 [2024-07-23 10:54:59.423639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.009 [2024-07-23 10:54:59.423653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.009 [2024-07-23 10:54:59.423697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.009 qpair failed and we were unable to recover it. 
00:34:11.009 [2024-07-23 10:54:59.433399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.009 [2024-07-23 10:54:59.433523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.009 [2024-07-23 10:54:59.433550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.009 [2024-07-23 10:54:59.433565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.010 [2024-07-23 10:54:59.433580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.010 [2024-07-23 10:54:59.433610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.010 qpair failed and we were unable to recover it. 
00:34:11.010 [2024-07-23 10:54:59.443426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.010 [2024-07-23 10:54:59.443527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.010 [2024-07-23 10:54:59.443557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.010 [2024-07-23 10:54:59.443573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.010 [2024-07-23 10:54:59.443587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.010 [2024-07-23 10:54:59.443617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.010 qpair failed and we were unable to recover it. 
00:34:11.010 [2024-07-23 10:54:59.453417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.010 [2024-07-23 10:54:59.453543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.010 [2024-07-23 10:54:59.453570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.010 [2024-07-23 10:54:59.453586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.010 [2024-07-23 10:54:59.453600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.010 [2024-07-23 10:54:59.453631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.010 qpair failed and we were unable to recover it. 
00:34:11.010 [2024-07-23 10:54:59.463455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.010 [2024-07-23 10:54:59.463587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.010 [2024-07-23 10:54:59.463614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.010 [2024-07-23 10:54:59.463630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.010 [2024-07-23 10:54:59.463644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.010 [2024-07-23 10:54:59.463684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.010 qpair failed and we were unable to recover it. 
00:34:11.010 [2024-07-23 10:54:59.473488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.010 [2024-07-23 10:54:59.473581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.010 [2024-07-23 10:54:59.473606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.010 [2024-07-23 10:54:59.473622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.010 [2024-07-23 10:54:59.473636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.010 [2024-07-23 10:54:59.473667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.010 qpair failed and we were unable to recover it. 
00:34:11.010 [2024-07-23 10:54:59.483535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.010 [2024-07-23 10:54:59.483634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.010 [2024-07-23 10:54:59.483660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.010 [2024-07-23 10:54:59.483675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.010 [2024-07-23 10:54:59.483689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.010 [2024-07-23 10:54:59.483720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.010 qpair failed and we were unable to recover it. 
00:34:11.010 [2024-07-23 10:54:59.493528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.010 [2024-07-23 10:54:59.493617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.010 [2024-07-23 10:54:59.493646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.010 [2024-07-23 10:54:59.493662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.010 [2024-07-23 10:54:59.493676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.010 [2024-07-23 10:54:59.493707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.010 qpair failed and we were unable to recover it. 
00:34:11.010 [2024-07-23 10:54:59.503587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.010 [2024-07-23 10:54:59.503707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.010 [2024-07-23 10:54:59.503733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.010 [2024-07-23 10:54:59.503749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.010 [2024-07-23 10:54:59.503762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.010 [2024-07-23 10:54:59.503793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.010 qpair failed and we were unable to recover it. 
00:34:11.270 [2024-07-23 10:54:59.513601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.270 [2024-07-23 10:54:59.513694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.270 [2024-07-23 10:54:59.513726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.270 [2024-07-23 10:54:59.513742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.270 [2024-07-23 10:54:59.513756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.270 [2024-07-23 10:54:59.513787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.270 qpair failed and we were unable to recover it. 
00:34:11.270 [2024-07-23 10:54:59.523659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.270 [2024-07-23 10:54:59.523747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.270 [2024-07-23 10:54:59.523772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.270 [2024-07-23 10:54:59.523787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.270 [2024-07-23 10:54:59.523800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.270 [2024-07-23 10:54:59.523831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.270 qpair failed and we were unable to recover it. 
00:34:11.270 [2024-07-23 10:54:59.533669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.270 [2024-07-23 10:54:59.533784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.270 [2024-07-23 10:54:59.533813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.270 [2024-07-23 10:54:59.533829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.270 [2024-07-23 10:54:59.533843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.270 [2024-07-23 10:54:59.533874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.270 qpair failed and we were unable to recover it. 
00:34:11.270 [2024-07-23 10:54:59.543722] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.270 [2024-07-23 10:54:59.543858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.270 [2024-07-23 10:54:59.543885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.270 [2024-07-23 10:54:59.543901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.270 [2024-07-23 10:54:59.543914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.270 [2024-07-23 10:54:59.543945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.270 qpair failed and we were unable to recover it. 
00:34:11.270 [2024-07-23 10:54:59.553743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.270 [2024-07-23 10:54:59.553852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.270 [2024-07-23 10:54:59.553880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.270 [2024-07-23 10:54:59.553895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.270 [2024-07-23 10:54:59.553915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.270 [2024-07-23 10:54:59.553946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.270 qpair failed and we were unable to recover it. 
00:34:11.270 [2024-07-23 10:54:59.563741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.270 [2024-07-23 10:54:59.563837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.563873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.563887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.563898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.563925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.573772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.573850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.573873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.573886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.573898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.573924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.583755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.583857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.583880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.583894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.583906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.583933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.593807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.593929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.593955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.593969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.593981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.594008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.603812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.603903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.603927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.603941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.603953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.603980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.613878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.613961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.613984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.613998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.614009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.614036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.623955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.624053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.624076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.624090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.624102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.624129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.634003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.634137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.634160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.634174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.634185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.634223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.643957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.644040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.644063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.644076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.644092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.644120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.653979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.654057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.654080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.654094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.654105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.654132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.664046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.664168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.664192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.664206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.664218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.664247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.674030] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.674112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.674135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.674149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.674161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.674188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.684072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.684154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.684180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.684194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.684205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.271 [2024-07-23 10:54:59.684232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.271 qpair failed and we were unable to recover it. 
00:34:11.271 [2024-07-23 10:54:59.694070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.271 [2024-07-23 10:54:59.694155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.271 [2024-07-23 10:54:59.694182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.271 [2024-07-23 10:54:59.694197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.271 [2024-07-23 10:54:59.694209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.272 [2024-07-23 10:54:59.694237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.272 qpair failed and we were unable to recover it. 
00:34:11.272 [2024-07-23 10:54:59.704139] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.272 [2024-07-23 10:54:59.704214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.272 [2024-07-23 10:54:59.704239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.272 [2024-07-23 10:54:59.704256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.272 [2024-07-23 10:54:59.704268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.272 [2024-07-23 10:54:59.704308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.272 qpair failed and we were unable to recover it. 
00:34:11.272 [2024-07-23 10:54:59.714208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.272 [2024-07-23 10:54:59.714298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.272 [2024-07-23 10:54:59.714321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.272 [2024-07-23 10:54:59.714334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.272 [2024-07-23 10:54:59.714346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.272 [2024-07-23 10:54:59.714396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.272 qpair failed and we were unable to recover it. 
00:34:11.272 [2024-07-23 10:54:59.724149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.272 [2024-07-23 10:54:59.724231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.272 [2024-07-23 10:54:59.724255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.272 [2024-07-23 10:54:59.724268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.272 [2024-07-23 10:54:59.724280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.272 [2024-07-23 10:54:59.724308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.272 qpair failed and we were unable to recover it. 
00:34:11.272 [2024-07-23 10:54:59.734170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.272 [2024-07-23 10:54:59.734257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.272 [2024-07-23 10:54:59.734281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.272 [2024-07-23 10:54:59.734299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.272 [2024-07-23 10:54:59.734312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.272 [2024-07-23 10:54:59.734339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.272 qpair failed and we were unable to recover it. 
00:34:11.272 [2024-07-23 10:54:59.744306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.272 [2024-07-23 10:54:59.744389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.272 [2024-07-23 10:54:59.744413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.272 [2024-07-23 10:54:59.744426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.272 [2024-07-23 10:54:59.744438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.272 [2024-07-23 10:54:59.744465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.272 qpair failed and we were unable to recover it. 
00:34:11.272 [2024-07-23 10:54:59.754259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.272 [2024-07-23 10:54:59.754350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.272 [2024-07-23 10:54:59.754372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.272 [2024-07-23 10:54:59.754385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.272 [2024-07-23 10:54:59.754397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.272 [2024-07-23 10:54:59.754424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.272 qpair failed and we were unable to recover it. 
00:34:11.272 [2024-07-23 10:54:59.764311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.272 [2024-07-23 10:54:59.764396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.272 [2024-07-23 10:54:59.764420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.272 [2024-07-23 10:54:59.764433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.272 [2024-07-23 10:54:59.764445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.272 [2024-07-23 10:54:59.764472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.272 qpair failed and we were unable to recover it. 
00:34:11.531 [2024-07-23 10:54:59.774403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.531 [2024-07-23 10:54:59.774492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.531 [2024-07-23 10:54:59.774515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.531 [2024-07-23 10:54:59.774528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.531 [2024-07-23 10:54:59.774540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.531 [2024-07-23 10:54:59.774567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.531 qpair failed and we were unable to recover it. 
00:34:11.531 [2024-07-23 10:54:59.784356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.531 [2024-07-23 10:54:59.784439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.531 [2024-07-23 10:54:59.784465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.531 [2024-07-23 10:54:59.784484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.531 [2024-07-23 10:54:59.784497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.784525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.794372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.794464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.794493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.794508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.794520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.794547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.804418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.804543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.804566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.804580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.804592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.804619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.814404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.814487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.814510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.814524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.814536] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.814563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.824544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.824665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.824691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.824705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.824717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.824743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.834477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.834581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.834604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.834617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.834629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.834656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.844520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.844603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.844626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.844640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.844652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.844690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.854522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.854646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.854673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.854688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.854699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.854728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.864577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.864652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.864675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.864688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.864700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.864743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.874627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.874728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.874751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.874764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.874776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.874814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.884746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.884840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.884876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.884890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.884901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.884940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.894664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.894744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.894770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.894784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.894795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.894833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.904770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.904865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.904891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.904905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.904917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.904956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.532 [2024-07-23 10:54:59.914748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.532 [2024-07-23 10:54:59.914837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.532 [2024-07-23 10:54:59.914865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.532 [2024-07-23 10:54:59.914879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.532 [2024-07-23 10:54:59.914890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.532 [2024-07-23 10:54:59.914929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.532 qpair failed and we were unable to recover it. 
00:34:11.533 [2024-07-23 10:54:59.924785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.533 [2024-07-23 10:54:59.924895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.533 [2024-07-23 10:54:59.924919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.533 [2024-07-23 10:54:59.924945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.533 [2024-07-23 10:54:59.924957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.533 [2024-07-23 10:54:59.924984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.533 qpair failed and we were unable to recover it. 
00:34:11.533 [2024-07-23 10:54:59.934952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.533 [2024-07-23 10:54:59.935049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.533 [2024-07-23 10:54:59.935073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.533 [2024-07-23 10:54:59.935087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.533 [2024-07-23 10:54:59.935098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.533 [2024-07-23 10:54:59.935148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.533 qpair failed and we were unable to recover it. 
00:34:11.533 [2024-07-23 10:54:59.944949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.533 [2024-07-23 10:54:59.945023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.533 [2024-07-23 10:54:59.945047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.533 [2024-07-23 10:54:59.945060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.533 [2024-07-23 10:54:59.945072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.533 [2024-07-23 10:54:59.945110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.533 qpair failed and we were unable to recover it. 
00:34:11.533 [2024-07-23 10:54:59.954866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.533 [2024-07-23 10:54:59.954950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.533 [2024-07-23 10:54:59.954974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.533 [2024-07-23 10:54:59.954988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.533 [2024-07-23 10:54:59.955000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.533 [2024-07-23 10:54:59.955043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.533 qpair failed and we were unable to recover it. 
00:34:11.533 [2024-07-23 10:54:59.964934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.533 [2024-07-23 10:54:59.965034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.533 [2024-07-23 10:54:59.965059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.533 [2024-07-23 10:54:59.965072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.533 [2024-07-23 10:54:59.965084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.533 [2024-07-23 10:54:59.965111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.533 qpair failed and we were unable to recover it. 
00:34:11.533 [2024-07-23 10:54:59.974850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.533 [2024-07-23 10:54:59.974924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.533 [2024-07-23 10:54:59.974948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.533 [2024-07-23 10:54:59.974961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.533 [2024-07-23 10:54:59.974973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.533 [2024-07-23 10:54:59.975000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.533 qpair failed and we were unable to recover it. 
00:34:11.533 [2024-07-23 10:54:59.984904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.533 [2024-07-23 10:54:59.984986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.533 [2024-07-23 10:54:59.985010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.533 [2024-07-23 10:54:59.985024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.533 [2024-07-23 10:54:59.985036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.533 [2024-07-23 10:54:59.985065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.533 qpair failed and we were unable to recover it. 
00:34:11.533 [2024-07-23 10:54:59.994927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.533 [2024-07-23 10:54:59.995009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.533 [2024-07-23 10:54:59.995033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.533 [2024-07-23 10:54:59.995047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.533 [2024-07-23 10:54:59.995059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.533 [2024-07-23 10:54:59.995086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.533 qpair failed and we were unable to recover it. 
00:34:11.533 [2024-07-23 10:55:00.005078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.533 [2024-07-23 10:55:00.005185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.533 [2024-07-23 10:55:00.005212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.533 [2024-07-23 10:55:00.005226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.533 [2024-07-23 10:55:00.005238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.533 [2024-07-23 10:55:00.005267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.533 qpair failed and we were unable to recover it. 
00:34:11.533 [2024-07-23 10:55:00.015160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.533 [2024-07-23 10:55:00.015286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.533 [2024-07-23 10:55:00.015321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.533 [2024-07-23 10:55:00.015342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.533 [2024-07-23 10:55:00.015360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.533 [2024-07-23 10:55:00.015399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.533 qpair failed and we were unable to recover it. 
00:34:11.533 [2024-07-23 10:55:00.025063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.533 [2024-07-23 10:55:00.025182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.533 [2024-07-23 10:55:00.025209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.533 [2024-07-23 10:55:00.025223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.533 [2024-07-23 10:55:00.025235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.533 [2024-07-23 10:55:00.025264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.533 qpair failed and we were unable to recover it. 
00:34:11.794 [2024-07-23 10:55:00.035107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.794 [2024-07-23 10:55:00.035216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.794 [2024-07-23 10:55:00.035240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.794 [2024-07-23 10:55:00.035254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.794 [2024-07-23 10:55:00.035265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.794 [2024-07-23 10:55:00.035292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.794 qpair failed and we were unable to recover it. 
00:34:11.794 [2024-07-23 10:55:00.045116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.794 [2024-07-23 10:55:00.045214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.794 [2024-07-23 10:55:00.045237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.794 [2024-07-23 10:55:00.045251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.794 [2024-07-23 10:55:00.045271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.794 [2024-07-23 10:55:00.045299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.794 qpair failed and we were unable to recover it. 
00:34:11.794 [2024-07-23 10:55:00.055129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.794 [2024-07-23 10:55:00.055249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.794 [2024-07-23 10:55:00.055275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.794 [2024-07-23 10:55:00.055291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.794 [2024-07-23 10:55:00.055303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.794 [2024-07-23 10:55:00.055343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.794 qpair failed and we were unable to recover it. 
00:34:11.794 [2024-07-23 10:55:00.065150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.794 [2024-07-23 10:55:00.065243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.794 [2024-07-23 10:55:00.065266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.794 [2024-07-23 10:55:00.065280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.794 [2024-07-23 10:55:00.065292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.794 [2024-07-23 10:55:00.065320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.794 qpair failed and we were unable to recover it. 
00:34:11.794 [2024-07-23 10:55:00.075327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.794 [2024-07-23 10:55:00.075440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.794 [2024-07-23 10:55:00.075463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.794 [2024-07-23 10:55:00.075496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.794 [2024-07-23 10:55:00.075508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.794 [2024-07-23 10:55:00.075547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.794 qpair failed and we were unable to recover it. 
00:34:11.794 [2024-07-23 10:55:00.085302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.794 [2024-07-23 10:55:00.085396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.794 [2024-07-23 10:55:00.085421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.794 [2024-07-23 10:55:00.085435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.794 [2024-07-23 10:55:00.085447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.794 [2024-07-23 10:55:00.085474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.794 qpair failed and we were unable to recover it. 
00:34:11.794 [2024-07-23 10:55:00.095192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.794 [2024-07-23 10:55:00.095269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.794 [2024-07-23 10:55:00.095292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.794 [2024-07-23 10:55:00.095305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.794 [2024-07-23 10:55:00.095317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.794 [2024-07-23 10:55:00.095343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.794 qpair failed and we were unable to recover it. 
00:34:11.794 [2024-07-23 10:55:00.105250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.794 [2024-07-23 10:55:00.105374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.794 [2024-07-23 10:55:00.105398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.794 [2024-07-23 10:55:00.105411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.794 [2024-07-23 10:55:00.105423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.794 [2024-07-23 10:55:00.105450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.794 qpair failed and we were unable to recover it. 
00:34:11.794 [2024-07-23 10:55:00.115360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.794 [2024-07-23 10:55:00.115458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.794 [2024-07-23 10:55:00.115486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.794 [2024-07-23 10:55:00.115501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.794 [2024-07-23 10:55:00.115512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.794 [2024-07-23 10:55:00.115539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.794 qpair failed and we were unable to recover it. 
00:34:11.794 [2024-07-23 10:55:00.125321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.794 [2024-07-23 10:55:00.125404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.794 [2024-07-23 10:55:00.125428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.794 [2024-07-23 10:55:00.125442] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.794 [2024-07-23 10:55:00.125453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.794 [2024-07-23 10:55:00.125486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.794 qpair failed and we were unable to recover it. 
00:34:11.794 [2024-07-23 10:55:00.135314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.794 [2024-07-23 10:55:00.135391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.794 [2024-07-23 10:55:00.135415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.794 [2024-07-23 10:55:00.135436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.794 [2024-07-23 10:55:00.135449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.794 [2024-07-23 10:55:00.135476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.794 qpair failed and we were unable to recover it. 
00:34:11.794 [2024-07-23 10:55:00.145343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.145441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.145465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.145484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.145512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.145541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.155419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.155538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.155561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.155575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.155586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.155614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.165406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.165539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.165563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.165576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.165588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.165615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.175473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.175574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.175598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.175611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.175623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.175651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.185596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.185683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.185706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.185719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.185731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.185758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.195581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.195670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.195693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.195707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.195719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.195745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.205508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.205607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.205630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.205643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.205655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.205681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.215559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.215639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.215664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.215678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.215690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.215716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.225552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.225630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.225657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.225672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.225683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.225710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.235617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.235701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.235724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.235738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.235749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.235776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.245607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.245689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.245715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.245729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.245741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.245768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.255716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.255818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.255841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.255854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.255867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.255905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.265734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.265847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.265869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.265882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.265894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.265926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.275775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.275874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.275900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.275915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.795 [2024-07-23 10:55:00.275927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.795 [2024-07-23 10:55:00.275955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.795 qpair failed and we were unable to recover it. 
00:34:11.795 [2024-07-23 10:55:00.285897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.795 [2024-07-23 10:55:00.285988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.795 [2024-07-23 10:55:00.286011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.795 [2024-07-23 10:55:00.286025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.796 [2024-07-23 10:55:00.286037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:11.796 [2024-07-23 10:55:00.286064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.796 qpair failed and we were unable to recover it. 
00:34:11.796 [2024-07-23 10:55:00.295775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.295876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.295900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.295915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.295927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.056 [2024-07-23 10:55:00.295956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-07-23 10:55:00.305817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.305913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.305937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.305950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.305962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.056 [2024-07-23 10:55:00.305989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-07-23 10:55:00.315860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.315945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.315973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.315988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.316000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.056 [2024-07-23 10:55:00.316026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-07-23 10:55:00.325834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.325952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.325975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.325989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.326001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.056 [2024-07-23 10:55:00.326027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-07-23 10:55:00.335848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.335926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.335949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.335963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.335974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.056 [2024-07-23 10:55:00.336001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-07-23 10:55:00.345883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.345960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.345984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.345997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.346009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.056 [2024-07-23 10:55:00.346036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-07-23 10:55:00.355916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.356022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.356044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.356058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.356081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.056 [2024-07-23 10:55:00.356114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-07-23 10:55:00.366092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.366182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.366203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.366217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.366229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.056 [2024-07-23 10:55:00.366256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-07-23 10:55:00.376016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.376097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.376119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.376132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.376144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.056 [2024-07-23 10:55:00.376171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-07-23 10:55:00.386120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.386204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.386228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.386241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.386253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.056 [2024-07-23 10:55:00.386280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-07-23 10:55:00.396058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.396170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.396194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.396208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.396220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.056 [2024-07-23 10:55:00.396257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-07-23 10:55:00.406052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.406130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.406158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.406173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.406185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.056 [2024-07-23 10:55:00.406212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-07-23 10:55:00.416065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.056 [2024-07-23 10:55:00.416142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.056 [2024-07-23 10:55:00.416164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.056 [2024-07-23 10:55:00.416178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.056 [2024-07-23 10:55:00.416190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.416217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.426189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.426291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.426315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.426329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.426341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.426368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.436248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.436352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.436376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.436389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.436402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.436429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.446165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.446251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.446274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.446287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.446304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.446332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.456168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.456259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.456290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.456305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.456317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.456344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.466192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.466302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.466328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.466342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.466354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.466382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.476265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.476354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.476376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.476390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.476402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.476441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.486265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.486346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.486382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.486396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.486407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.486448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.496295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.496382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.496405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.496419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.496431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.496459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.506318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.506417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.506439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.506453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.506464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.506498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.516358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.516449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.516470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.516490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.516503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.516531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.526414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.526512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.526543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.526557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.526569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.526596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.536410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.536512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.536535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.536554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.536567] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.536594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-07-23 10:55:00.546429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.057 [2024-07-23 10:55:00.546558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.057 [2024-07-23 10:55:00.546582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.057 [2024-07-23 10:55:00.546597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.057 [2024-07-23 10:55:00.546609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.057 [2024-07-23 10:55:00.546639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.058 [2024-07-23 10:55:00.556577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.058 [2024-07-23 10:55:00.556689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.058 [2024-07-23 10:55:00.556712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.058 [2024-07-23 10:55:00.556726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.058 [2024-07-23 10:55:00.556738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.058 [2024-07-23 10:55:00.556765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.058 qpair failed and we were unable to recover it. 
00:34:12.319 [2024-07-23 10:55:00.566507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.319 [2024-07-23 10:55:00.566632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.319 [2024-07-23 10:55:00.566656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.319 [2024-07-23 10:55:00.566669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.319 [2024-07-23 10:55:00.566681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.320 [2024-07-23 10:55:00.566708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.320 qpair failed and we were unable to recover it. 
00:34:12.320 [2024-07-23 10:55:00.576534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.320 [2024-07-23 10:55:00.576646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.320 [2024-07-23 10:55:00.576669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.320 [2024-07-23 10:55:00.576683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.320 [2024-07-23 10:55:00.576695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.320 [2024-07-23 10:55:00.576721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.320 qpair failed and we were unable to recover it. 
00:34:12.320 [2024-07-23 10:55:00.586556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.320 [2024-07-23 10:55:00.586638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.320 [2024-07-23 10:55:00.586660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.320 [2024-07-23 10:55:00.586674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.320 [2024-07-23 10:55:00.586685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.320 [2024-07-23 10:55:00.586725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.320 qpair failed and we were unable to recover it. 
00:34:12.320 [2024-07-23 10:55:00.596682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.320 [2024-07-23 10:55:00.596814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.320 [2024-07-23 10:55:00.596839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.320 [2024-07-23 10:55:00.596852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.320 [2024-07-23 10:55:00.596864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.320 [2024-07-23 10:55:00.596902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.320 qpair failed and we were unable to recover it. 
00:34:12.320 [2024-07-23 10:55:00.606628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.320 [2024-07-23 10:55:00.606737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.320 [2024-07-23 10:55:00.606772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.320 [2024-07-23 10:55:00.606786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.320 [2024-07-23 10:55:00.606798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.320 [2024-07-23 10:55:00.606836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.320 qpair failed and we were unable to recover it. 
00:34:12.320 [2024-07-23 10:55:00.616765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.320 [2024-07-23 10:55:00.616857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.320 [2024-07-23 10:55:00.616894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.320 [2024-07-23 10:55:00.616908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.320 [2024-07-23 10:55:00.616920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.320 [2024-07-23 10:55:00.616949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.320 qpair failed and we were unable to recover it. 
00:34:12.320 [2024-07-23 10:55:00.626725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.320 [2024-07-23 10:55:00.626863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.320 [2024-07-23 10:55:00.626890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.320 [2024-07-23 10:55:00.626910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.320 [2024-07-23 10:55:00.626923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.320 [2024-07-23 10:55:00.626951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.320 qpair failed and we were unable to recover it. 
00:34:12.320 [2024-07-23 10:55:00.636795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.320 [2024-07-23 10:55:00.636928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.320 [2024-07-23 10:55:00.636954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.320 [2024-07-23 10:55:00.636968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.320 [2024-07-23 10:55:00.636980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.320 [2024-07-23 10:55:00.637007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.320 qpair failed and we were unable to recover it. 
00:34:12.320 [2024-07-23 10:55:00.646684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.646810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.646834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.646848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.646859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.646886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.656765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.656842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.656864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.656878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.656890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.656927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.666760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.666844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.666867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.666880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.666891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.666931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.676794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.676928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.676952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.676965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.676977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.677005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.686831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.686912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.686935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.686949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.686961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.686987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.696850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.696932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.696958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.696974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.696985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.697013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.706861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.706988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.707012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.707026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.707038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.707065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.716916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.717001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.717029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.717044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.717056] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.717083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.726945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.727040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.727063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.727089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.727101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.727127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.736974] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.737054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.737079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.737093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.737104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.737133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.747067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.747145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.747168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.747182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.747194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.747221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.757086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.757171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.757193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.757206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.757218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.757249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.767045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.767141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.767165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.767178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.767190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.767217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.777042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.777138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.777161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.777175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.777186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.777213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.787082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.787160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.787183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.787197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.320 [2024-07-23 10:55:00.787208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.320 [2024-07-23 10:55:00.787235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.320 qpair failed and we were unable to recover it.
00:34:12.320 [2024-07-23 10:55:00.797149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.320 [2024-07-23 10:55:00.797230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.320 [2024-07-23 10:55:00.797253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.320 [2024-07-23 10:55:00.797266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.321 [2024-07-23 10:55:00.797278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.321 [2024-07-23 10:55:00.797305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.321 qpair failed and we were unable to recover it.
00:34:12.321 [2024-07-23 10:55:00.807157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.321 [2024-07-23 10:55:00.807251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.321 [2024-07-23 10:55:00.807290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.321 [2024-07-23 10:55:00.807304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.321 [2024-07-23 10:55:00.807316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.321 [2024-07-23 10:55:00.807354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.321 qpair failed and we were unable to recover it.
00:34:12.321 [2024-07-23 10:55:00.817163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.321 [2024-07-23 10:55:00.817249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.321 [2024-07-23 10:55:00.817273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.321 [2024-07-23 10:55:00.817286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.321 [2024-07-23 10:55:00.817298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.321 [2024-07-23 10:55:00.817325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.321 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-23 10:55:00.827217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.582 [2024-07-23 10:55:00.827297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.582 [2024-07-23 10:55:00.827320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.582 [2024-07-23 10:55:00.827333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.582 [2024-07-23 10:55:00.827345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.582 [2024-07-23 10:55:00.827394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-23 10:55:00.837237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.582 [2024-07-23 10:55:00.837321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.582 [2024-07-23 10:55:00.837344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.582 [2024-07-23 10:55:00.837357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.582 [2024-07-23 10:55:00.837369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.582 [2024-07-23 10:55:00.837407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.582 qpair failed and we were unable to recover it.
00:34:12.582 [2024-07-23 10:55:00.847295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.582 [2024-07-23 10:55:00.847390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.582 [2024-07-23 10:55:00.847425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.582 [2024-07-23 10:55:00.847439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.847458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.847505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.857320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.857444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.857468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.857489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.857509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.857540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.867333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.867418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.867444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.867459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.867471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.867520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.877445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.877535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.877560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.877574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.877586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.877614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.887343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.887437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.887461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.887475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.887493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.887522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.897508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.897646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.897670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.897684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.897696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.897723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.907439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.907548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.907571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.907585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.907597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.907623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.917492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.917575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.917598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.917611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.917623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.917673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.927507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.927602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.927638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.927652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.927664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.927691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.937512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.937614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.937638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.937656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.937668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.937706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.947712] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.947814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.947837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.947850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.947862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.947888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.957598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.957676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.957700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.957713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.957724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.957762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.967623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.967703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.967727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.967741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.583 [2024-07-23 10:55:00.967752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.583 [2024-07-23 10:55:00.967779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.583 qpair failed and we were unable to recover it.
00:34:12.583 [2024-07-23 10:55:00.977668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.583 [2024-07-23 10:55:00.977749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.583 [2024-07-23 10:55:00.977773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.583 [2024-07-23 10:55:00.977787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.584 [2024-07-23 10:55:00.977798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.584 [2024-07-23 10:55:00.977826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.584 qpair failed and we were unable to recover it.
00:34:12.584 [2024-07-23 10:55:00.987625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.584 [2024-07-23 10:55:00.987745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.584 [2024-07-23 10:55:00.987768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.584 [2024-07-23 10:55:00.987781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.584 [2024-07-23 10:55:00.987793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.584 [2024-07-23 10:55:00.987820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.584 qpair failed and we were unable to recover it.
00:34:12.584 [2024-07-23 10:55:00.997678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.584 [2024-07-23 10:55:00.997759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.584 [2024-07-23 10:55:00.997783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.584 [2024-07-23 10:55:00.997797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.584 [2024-07-23 10:55:00.997809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.584 [2024-07-23 10:55:00.997836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.584 qpair failed and we were unable to recover it.
00:34:12.584 [2024-07-23 10:55:01.007723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.584 [2024-07-23 10:55:01.007813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.584 [2024-07-23 10:55:01.007837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.584 [2024-07-23 10:55:01.007851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.584 [2024-07-23 10:55:01.007863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.584 [2024-07-23 10:55:01.007889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-23 10:55:01.017738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.584 [2024-07-23 10:55:01.017821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.584 [2024-07-23 10:55:01.017847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.584 [2024-07-23 10:55:01.017862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.584 [2024-07-23 10:55:01.017874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.584 [2024-07-23 10:55:01.017901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-23 10:55:01.027756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.584 [2024-07-23 10:55:01.027834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.584 [2024-07-23 10:55:01.027859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.584 [2024-07-23 10:55:01.027880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.584 [2024-07-23 10:55:01.027892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.584 [2024-07-23 10:55:01.027931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-23 10:55:01.037871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.584 [2024-07-23 10:55:01.037999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.584 [2024-07-23 10:55:01.038022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.584 [2024-07-23 10:55:01.038035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.584 [2024-07-23 10:55:01.038047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.584 [2024-07-23 10:55:01.038085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-23 10:55:01.047850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.584 [2024-07-23 10:55:01.047951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.584 [2024-07-23 10:55:01.047985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.584 [2024-07-23 10:55:01.047998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.584 [2024-07-23 10:55:01.048010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.584 [2024-07-23 10:55:01.048048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-23 10:55:01.057873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.584 [2024-07-23 10:55:01.057950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.584 [2024-07-23 10:55:01.057974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.584 [2024-07-23 10:55:01.057988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.584 [2024-07-23 10:55:01.058000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.584 [2024-07-23 10:55:01.058038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-23 10:55:01.067990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.584 [2024-07-23 10:55:01.068067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.584 [2024-07-23 10:55:01.068090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.584 [2024-07-23 10:55:01.068104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.584 [2024-07-23 10:55:01.068116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.584 [2024-07-23 10:55:01.068142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.584 [2024-07-23 10:55:01.077884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.584 [2024-07-23 10:55:01.077964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.584 [2024-07-23 10:55:01.077987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.584 [2024-07-23 10:55:01.078001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.584 [2024-07-23 10:55:01.078013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.584 [2024-07-23 10:55:01.078040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.584 qpair failed and we were unable to recover it. 
00:34:12.844 [2024-07-23 10:55:01.087951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.844 [2024-07-23 10:55:01.088036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.844 [2024-07-23 10:55:01.088059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.844 [2024-07-23 10:55:01.088073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.844 [2024-07-23 10:55:01.088085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.844 [2024-07-23 10:55:01.088111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.844 qpair failed and we were unable to recover it. 
00:34:12.844 [2024-07-23 10:55:01.097974] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.844 [2024-07-23 10:55:01.098050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.844 [2024-07-23 10:55:01.098073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.844 [2024-07-23 10:55:01.098086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.844 [2024-07-23 10:55:01.098098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.844 [2024-07-23 10:55:01.098148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.844 qpair failed and we were unable to recover it. 
00:34:12.844 [2024-07-23 10:55:01.107979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.844 [2024-07-23 10:55:01.108060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.844 [2024-07-23 10:55:01.108082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.844 [2024-07-23 10:55:01.108095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.844 [2024-07-23 10:55:01.108106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.844 [2024-07-23 10:55:01.108132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.844 qpair failed and we were unable to recover it. 
00:34:12.844 [2024-07-23 10:55:01.117985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.844 [2024-07-23 10:55:01.118085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.844 [2024-07-23 10:55:01.118114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.844 [2024-07-23 10:55:01.118129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.844 [2024-07-23 10:55:01.118141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.844 [2024-07-23 10:55:01.118168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.844 qpair failed and we were unable to recover it. 
00:34:12.844 [2024-07-23 10:55:01.128059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.844 [2024-07-23 10:55:01.128135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.844 [2024-07-23 10:55:01.128160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.844 [2024-07-23 10:55:01.128174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.844 [2024-07-23 10:55:01.128186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.844 [2024-07-23 10:55:01.128213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.844 qpair failed and we were unable to recover it. 
00:34:12.844 [2024-07-23 10:55:01.138034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.844 [2024-07-23 10:55:01.138118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.844 [2024-07-23 10:55:01.138145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.844 [2024-07-23 10:55:01.138160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.844 [2024-07-23 10:55:01.138172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.844 [2024-07-23 10:55:01.138200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.844 qpair failed and we were unable to recover it. 
00:34:12.844 [2024-07-23 10:55:01.148075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.844 [2024-07-23 10:55:01.148174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.844 [2024-07-23 10:55:01.148199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.844 [2024-07-23 10:55:01.148213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.844 [2024-07-23 10:55:01.148224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.844 [2024-07-23 10:55:01.148251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.844 qpair failed and we were unable to recover it. 
00:34:12.844 [2024-07-23 10:55:01.158193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.844 [2024-07-23 10:55:01.158275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.844 [2024-07-23 10:55:01.158298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.844 [2024-07-23 10:55:01.158311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.844 [2024-07-23 10:55:01.158323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.844 [2024-07-23 10:55:01.158365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.844 qpair failed and we were unable to recover it. 
00:34:12.844 [2024-07-23 10:55:01.168146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.844 [2024-07-23 10:55:01.168241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.844 [2024-07-23 10:55:01.168276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.844 [2024-07-23 10:55:01.168290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.844 [2024-07-23 10:55:01.168302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.844 [2024-07-23 10:55:01.168340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.844 qpair failed and we were unable to recover it. 
00:34:12.844 [2024-07-23 10:55:01.178187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.845 [2024-07-23 10:55:01.178264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.845 [2024-07-23 10:55:01.178287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.845 [2024-07-23 10:55:01.178300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.845 [2024-07-23 10:55:01.178311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.845 [2024-07-23 10:55:01.178362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.845 qpair failed and we were unable to recover it. 
00:34:12.845 [2024-07-23 10:55:01.188190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.845 [2024-07-23 10:55:01.188266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.845 [2024-07-23 10:55:01.188288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.845 [2024-07-23 10:55:01.188302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.845 [2024-07-23 10:55:01.188313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.845 [2024-07-23 10:55:01.188351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.845 qpair failed and we were unable to recover it. 
00:34:12.845 [2024-07-23 10:55:01.198240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.845 [2024-07-23 10:55:01.198330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.845 [2024-07-23 10:55:01.198356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.845 [2024-07-23 10:55:01.198371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.845 [2024-07-23 10:55:01.198382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.845 [2024-07-23 10:55:01.198421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.845 qpair failed and we were unable to recover it. 
00:34:12.845 [2024-07-23 10:55:01.208390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.845 [2024-07-23 10:55:01.208493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.845 [2024-07-23 10:55:01.208532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.845 [2024-07-23 10:55:01.208547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.845 [2024-07-23 10:55:01.208558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.845 [2024-07-23 10:55:01.208585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.845 qpair failed and we were unable to recover it. 
00:34:12.845 [2024-07-23 10:55:01.218319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.845 [2024-07-23 10:55:01.218418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.845 [2024-07-23 10:55:01.218440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.845 [2024-07-23 10:55:01.218453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.845 [2024-07-23 10:55:01.218464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.845 [2024-07-23 10:55:01.218509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.845 qpair failed and we were unable to recover it. 
00:34:12.845 [2024-07-23 10:55:01.228385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.845 [2024-07-23 10:55:01.228463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.845 [2024-07-23 10:55:01.228493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.845 [2024-07-23 10:55:01.228508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.845 [2024-07-23 10:55:01.228520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.845 [2024-07-23 10:55:01.228558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.845 qpair failed and we were unable to recover it. 
00:34:12.845 [2024-07-23 10:55:01.238349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.845 [2024-07-23 10:55:01.238468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.845 [2024-07-23 10:55:01.238496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.845 [2024-07-23 10:55:01.238510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.845 [2024-07-23 10:55:01.238522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.845 [2024-07-23 10:55:01.238549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.845 qpair failed and we were unable to recover it. 
00:34:12.845 [2024-07-23 10:55:01.248356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.845 [2024-07-23 10:55:01.248436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.845 [2024-07-23 10:55:01.248458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.845 [2024-07-23 10:55:01.248472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.845 [2024-07-23 10:55:01.248497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.845 [2024-07-23 10:55:01.248526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.845 qpair failed and we were unable to recover it. 
00:34:12.845 [2024-07-23 10:55:01.258409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.845 [2024-07-23 10:55:01.258489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.845 [2024-07-23 10:55:01.258513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.845 [2024-07-23 10:55:01.258526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.845 [2024-07-23 10:55:01.258538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.845 [2024-07-23 10:55:01.258588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.845 qpair failed and we were unable to recover it. 
00:34:12.845 [2024-07-23 10:55:01.268397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.845 [2024-07-23 10:55:01.268484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.845 [2024-07-23 10:55:01.268508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.845 [2024-07-23 10:55:01.268522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.845 [2024-07-23 10:55:01.268534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.845 [2024-07-23 10:55:01.268561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.845 qpair failed and we were unable to recover it. 
00:34:12.845 [2024-07-23 10:55:01.278514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.845 [2024-07-23 10:55:01.278602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.845 [2024-07-23 10:55:01.278625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.845 [2024-07-23 10:55:01.278639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.845 [2024-07-23 10:55:01.278651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:12.845 [2024-07-23 10:55:01.278678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:12.845 qpair failed and we were unable to recover it. 
00:34:12.845 [2024-07-23 10:55:01.288490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.845 [2024-07-23 10:55:01.288579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.845 [2024-07-23 10:55:01.288605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.845 [2024-07-23 10:55:01.288621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.845 [2024-07-23 10:55:01.288633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.845 [2024-07-23 10:55:01.288661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.845 qpair failed and we were unable to recover it.
00:34:12.845 [2024-07-23 10:55:01.298487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.845 [2024-07-23 10:55:01.298572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.845 [2024-07-23 10:55:01.298595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.845 [2024-07-23 10:55:01.298609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.845 [2024-07-23 10:55:01.298621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.845 [2024-07-23 10:55:01.298648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.845 qpair failed and we were unable to recover it.
00:34:12.845 [2024-07-23 10:55:01.308584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.845 [2024-07-23 10:55:01.308677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.846 [2024-07-23 10:55:01.308700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.846 [2024-07-23 10:55:01.308713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.846 [2024-07-23 10:55:01.308724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.846 [2024-07-23 10:55:01.308750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.846 qpair failed and we were unable to recover it.
00:34:12.846 [2024-07-23 10:55:01.318564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.846 [2024-07-23 10:55:01.318644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.846 [2024-07-23 10:55:01.318666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.846 [2024-07-23 10:55:01.318679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.846 [2024-07-23 10:55:01.318690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.846 [2024-07-23 10:55:01.318729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.846 qpair failed and we were unable to recover it.
00:34:12.846 [2024-07-23 10:55:01.328569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.846 [2024-07-23 10:55:01.328653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.846 [2024-07-23 10:55:01.328676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.846 [2024-07-23 10:55:01.328690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.846 [2024-07-23 10:55:01.328702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.846 [2024-07-23 10:55:01.328729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.846 qpair failed and we were unable to recover it.
00:34:12.846 [2024-07-23 10:55:01.338631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.846 [2024-07-23 10:55:01.338716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.846 [2024-07-23 10:55:01.338740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.846 [2024-07-23 10:55:01.338753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.846 [2024-07-23 10:55:01.338770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:12.846 [2024-07-23 10:55:01.338798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.846 qpair failed and we were unable to recover it.
00:34:13.105 [2024-07-23 10:55:01.348648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.105 [2024-07-23 10:55:01.348729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.105 [2024-07-23 10:55:01.348753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.105 [2024-07-23 10:55:01.348766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.105 [2024-07-23 10:55:01.348778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.105 [2024-07-23 10:55:01.348805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.105 qpair failed and we were unable to recover it.
00:34:13.105 [2024-07-23 10:55:01.358706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.105 [2024-07-23 10:55:01.358789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.105 [2024-07-23 10:55:01.358814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.105 [2024-07-23 10:55:01.358828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.105 [2024-07-23 10:55:01.358840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.105 [2024-07-23 10:55:01.358866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.105 qpair failed and we were unable to recover it.
00:34:13.105 [2024-07-23 10:55:01.368679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.105 [2024-07-23 10:55:01.368768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.105 [2024-07-23 10:55:01.368792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.105 [2024-07-23 10:55:01.368806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.105 [2024-07-23 10:55:01.368818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.105 [2024-07-23 10:55:01.368846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.105 qpair failed and we were unable to recover it.
00:34:13.105 [2024-07-23 10:55:01.378752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.105 [2024-07-23 10:55:01.378835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.105 [2024-07-23 10:55:01.378859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.105 [2024-07-23 10:55:01.378872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.105 [2024-07-23 10:55:01.378884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.105 [2024-07-23 10:55:01.378935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.105 qpair failed and we were unable to recover it.
00:34:13.105 [2024-07-23 10:55:01.388731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.105 [2024-07-23 10:55:01.388819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.105 [2024-07-23 10:55:01.388843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.105 [2024-07-23 10:55:01.388857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.105 [2024-07-23 10:55:01.388869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.105 [2024-07-23 10:55:01.388896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.105 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.398795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.398882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.398907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.398922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.398933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.398961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.408832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.408925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.408960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.408975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.408986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.409024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.418829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.418915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.418938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.418951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.418963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.419001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.428902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.428990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.429013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.429031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.429043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.429069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.438899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.439005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.439027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.439041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.439052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.439078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.448985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.449062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.449085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.449098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.449110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.449137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.458927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.459011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.459034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.459048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.459060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.459087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.469014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.469113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.469136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.469149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.469161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.469210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.479112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.479194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.479218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.479231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.479244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.479270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.489012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.489092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.489115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.489129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.489141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.489179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.499055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.499133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.499157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.499171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.499183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.499212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.509086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.509201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.509227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.509242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.509254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.509280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.519140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.519240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.106 [2024-07-23 10:55:01.519267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.106 [2024-07-23 10:55:01.519281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.106 [2024-07-23 10:55:01.519292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.106 [2024-07-23 10:55:01.519318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.106 qpair failed and we were unable to recover it.
00:34:13.106 [2024-07-23 10:55:01.529166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.106 [2024-07-23 10:55:01.529264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.107 [2024-07-23 10:55:01.529299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.107 [2024-07-23 10:55:01.529313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.107 [2024-07-23 10:55:01.529325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.107 [2024-07-23 10:55:01.529362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.107 qpair failed and we were unable to recover it.
00:34:13.107 [2024-07-23 10:55:01.539149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.107 [2024-07-23 10:55:01.539244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.107 [2024-07-23 10:55:01.539270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.107 [2024-07-23 10:55:01.539285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.107 [2024-07-23 10:55:01.539297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.107 [2024-07-23 10:55:01.539336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.107 qpair failed and we were unable to recover it.
00:34:13.107 [2024-07-23 10:55:01.549198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.107 [2024-07-23 10:55:01.549271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.107 [2024-07-23 10:55:01.549294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.107 [2024-07-23 10:55:01.549308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.107 [2024-07-23 10:55:01.549319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.107 [2024-07-23 10:55:01.549359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.107 qpair failed and we were unable to recover it.
00:34:13.107 [2024-07-23 10:55:01.559246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.107 [2024-07-23 10:55:01.559331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.107 [2024-07-23 10:55:01.559354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.107 [2024-07-23 10:55:01.559367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.107 [2024-07-23 10:55:01.559379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.107 [2024-07-23 10:55:01.559421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.107 qpair failed and we were unable to recover it.
00:34:13.107 [2024-07-23 10:55:01.569274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.107 [2024-07-23 10:55:01.569365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.107 [2024-07-23 10:55:01.569389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.107 [2024-07-23 10:55:01.569419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.107 [2024-07-23 10:55:01.569431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.107 [2024-07-23 10:55:01.569457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.107 qpair failed and we were unable to recover it.
00:34:13.107 [2024-07-23 10:55:01.579277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.107 [2024-07-23 10:55:01.579351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.107 [2024-07-23 10:55:01.579374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.107 [2024-07-23 10:55:01.579387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.107 [2024-07-23 10:55:01.579399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.107 [2024-07-23 10:55:01.579425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.107 qpair failed and we were unable to recover it.
00:34:13.107 [2024-07-23 10:55:01.589304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.107 [2024-07-23 10:55:01.589401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.107 [2024-07-23 10:55:01.589426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.107 [2024-07-23 10:55:01.589439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.107 [2024-07-23 10:55:01.589451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.107 [2024-07-23 10:55:01.589478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.107 qpair failed and we were unable to recover it.
00:34:13.107 [2024-07-23 10:55:01.599319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.107 [2024-07-23 10:55:01.599401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.107 [2024-07-23 10:55:01.599425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.107 [2024-07-23 10:55:01.599440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.107 [2024-07-23 10:55:01.599451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.107 [2024-07-23 10:55:01.599485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.107 qpair failed and we were unable to recover it.
00:34:13.366 [2024-07-23 10:55:01.609376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.366 [2024-07-23 10:55:01.609484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.366 [2024-07-23 10:55:01.609522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.366 [2024-07-23 10:55:01.609538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.366 [2024-07-23 10:55:01.609550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.367 [2024-07-23 10:55:01.609579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.367 qpair failed and we were unable to recover it.
00:34:13.367 [2024-07-23 10:55:01.619359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.367 [2024-07-23 10:55:01.619440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.367 [2024-07-23 10:55:01.619467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.367 [2024-07-23 10:55:01.619492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.367 [2024-07-23 10:55:01.619508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.367 [2024-07-23 10:55:01.619542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.367 qpair failed and we were unable to recover it.
00:34:13.367 [2024-07-23 10:55:01.629473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.367 [2024-07-23 10:55:01.629570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.367 [2024-07-23 10:55:01.629595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.367 [2024-07-23 10:55:01.629609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.367 [2024-07-23 10:55:01.629621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.367 [2024-07-23 10:55:01.629649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.367 qpair failed and we were unable to recover it.
00:34:13.367 [2024-07-23 10:55:01.639432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:13.367 [2024-07-23 10:55:01.639523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:13.367 [2024-07-23 10:55:01.639548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:13.367 [2024-07-23 10:55:01.639562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:13.367 [2024-07-23 10:55:01.639574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90
00:34:13.367 [2024-07-23 10:55:01.639601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:13.367 qpair failed and we were unable to recover it.
00:34:13.367 [2024-07-23 10:55:01.649497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.367 [2024-07-23 10:55:01.649598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.367 [2024-07-23 10:55:01.649622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.367 [2024-07-23 10:55:01.649636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.367 [2024-07-23 10:55:01.649648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.367 [2024-07-23 10:55:01.649680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.367 qpair failed and we were unable to recover it. 
00:34:13.367 [2024-07-23 10:55:01.659477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.367 [2024-07-23 10:55:01.659571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.367 [2024-07-23 10:55:01.659595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.367 [2024-07-23 10:55:01.659609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.367 [2024-07-23 10:55:01.659621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.367 [2024-07-23 10:55:01.659648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.367 qpair failed and we were unable to recover it. 
00:34:13.367 [2024-07-23 10:55:01.669531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.367 [2024-07-23 10:55:01.669618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.367 [2024-07-23 10:55:01.669642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.367 [2024-07-23 10:55:01.669656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.367 [2024-07-23 10:55:01.669668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.367 [2024-07-23 10:55:01.669695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.367 qpair failed and we were unable to recover it. 
00:34:13.367 [2024-07-23 10:55:01.679586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.367 [2024-07-23 10:55:01.679677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.367 [2024-07-23 10:55:01.679700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.367 [2024-07-23 10:55:01.679714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.367 [2024-07-23 10:55:01.679726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.367 [2024-07-23 10:55:01.679753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.367 qpair failed and we were unable to recover it. 
00:34:13.367 [2024-07-23 10:55:01.689639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.367 [2024-07-23 10:55:01.689726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.367 [2024-07-23 10:55:01.689750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.367 [2024-07-23 10:55:01.689768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.367 [2024-07-23 10:55:01.689781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.367 [2024-07-23 10:55:01.689808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.367 qpair failed and we were unable to recover it. 
00:34:13.367 [2024-07-23 10:55:01.699600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.367 [2024-07-23 10:55:01.699688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.367 [2024-07-23 10:55:01.699712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.367 [2024-07-23 10:55:01.699726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.367 [2024-07-23 10:55:01.699738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.367 [2024-07-23 10:55:01.699765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.367 qpair failed and we were unable to recover it. 
00:34:13.367 [2024-07-23 10:55:01.709661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.367 [2024-07-23 10:55:01.709748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.367 [2024-07-23 10:55:01.709771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.367 [2024-07-23 10:55:01.709785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.367 [2024-07-23 10:55:01.709797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.367 [2024-07-23 10:55:01.709836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.367 qpair failed and we were unable to recover it. 
00:34:13.367 [2024-07-23 10:55:01.719677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.367 [2024-07-23 10:55:01.719821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.367 [2024-07-23 10:55:01.719857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.367 [2024-07-23 10:55:01.719871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.367 [2024-07-23 10:55:01.719883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.367 [2024-07-23 10:55:01.719923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.367 qpair failed and we were unable to recover it. 
00:34:13.367 [2024-07-23 10:55:01.729824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.367 [2024-07-23 10:55:01.729917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.367 [2024-07-23 10:55:01.729940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.367 [2024-07-23 10:55:01.729954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.367 [2024-07-23 10:55:01.729966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.367 [2024-07-23 10:55:01.729993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.367 qpair failed and we were unable to recover it. 
00:34:13.367 [2024-07-23 10:55:01.739714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.367 [2024-07-23 10:55:01.739800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.367 [2024-07-23 10:55:01.739824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.367 [2024-07-23 10:55:01.739837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.367 [2024-07-23 10:55:01.739854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.367 [2024-07-23 10:55:01.739882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.367 qpair failed and we were unable to recover it. 
00:34:13.368 [2024-07-23 10:55:01.749738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.368 [2024-07-23 10:55:01.749828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.368 [2024-07-23 10:55:01.749851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.368 [2024-07-23 10:55:01.749865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.368 [2024-07-23 10:55:01.749877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.368 [2024-07-23 10:55:01.749904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.368 qpair failed and we were unable to recover it. 
00:34:13.368 [2024-07-23 10:55:01.759793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.368 [2024-07-23 10:55:01.759910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.368 [2024-07-23 10:55:01.759932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.368 [2024-07-23 10:55:01.759946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.368 [2024-07-23 10:55:01.759958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.368 [2024-07-23 10:55:01.759996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.368 qpair failed and we were unable to recover it. 
00:34:13.368 [2024-07-23 10:55:01.769823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.368 [2024-07-23 10:55:01.769898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.368 [2024-07-23 10:55:01.769924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.368 [2024-07-23 10:55:01.769938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.368 [2024-07-23 10:55:01.769950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.368 [2024-07-23 10:55:01.769979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.368 qpair failed and we were unable to recover it. 
00:34:13.368 [2024-07-23 10:55:01.779993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.368 [2024-07-23 10:55:01.780117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.368 [2024-07-23 10:55:01.780152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.368 [2024-07-23 10:55:01.780166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.368 [2024-07-23 10:55:01.780178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.368 [2024-07-23 10:55:01.780217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.368 qpair failed and we were unable to recover it. 
00:34:13.368 [2024-07-23 10:55:01.789854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.368 [2024-07-23 10:55:01.789961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.368 [2024-07-23 10:55:01.789986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.368 [2024-07-23 10:55:01.789999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.368 [2024-07-23 10:55:01.790011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.368 [2024-07-23 10:55:01.790038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.368 qpair failed and we were unable to recover it. 
00:34:13.368 [2024-07-23 10:55:01.799933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.368 [2024-07-23 10:55:01.800030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.368 [2024-07-23 10:55:01.800057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.368 [2024-07-23 10:55:01.800070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.368 [2024-07-23 10:55:01.800082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.368 [2024-07-23 10:55:01.800110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.368 qpair failed and we were unable to recover it. 
00:34:13.368 [2024-07-23 10:55:01.809967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.368 [2024-07-23 10:55:01.810066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.368 [2024-07-23 10:55:01.810090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.368 [2024-07-23 10:55:01.810103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.368 [2024-07-23 10:55:01.810115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.368 [2024-07-23 10:55:01.810143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.368 qpair failed and we were unable to recover it. 
00:34:13.368 [2024-07-23 10:55:01.819922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.368 [2024-07-23 10:55:01.820007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.368 [2024-07-23 10:55:01.820031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.368 [2024-07-23 10:55:01.820045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.368 [2024-07-23 10:55:01.820057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.368 [2024-07-23 10:55:01.820084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.368 qpair failed and we were unable to recover it. 
00:34:13.368 [2024-07-23 10:55:01.829982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.368 [2024-07-23 10:55:01.830068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.368 [2024-07-23 10:55:01.830092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.368 [2024-07-23 10:55:01.830111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.368 [2024-07-23 10:55:01.830124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.368 [2024-07-23 10:55:01.830151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.368 qpair failed and we were unable to recover it. 
00:34:13.368 [2024-07-23 10:55:01.839988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.368 [2024-07-23 10:55:01.840071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.368 [2024-07-23 10:55:01.840098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.368 [2024-07-23 10:55:01.840112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.368 [2024-07-23 10:55:01.840123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.368 [2024-07-23 10:55:01.840150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.368 qpair failed and we were unable to recover it. 
00:34:13.368 [2024-07-23 10:55:01.850042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.368 [2024-07-23 10:55:01.850158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.368 [2024-07-23 10:55:01.850194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.368 [2024-07-23 10:55:01.850208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.368 [2024-07-23 10:55:01.850219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.368 [2024-07-23 10:55:01.850270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.368 qpair failed and we were unable to recover it. 
00:34:13.368 [2024-07-23 10:55:01.860026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.368 [2024-07-23 10:55:01.860106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.368 [2024-07-23 10:55:01.860134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.368 [2024-07-23 10:55:01.860147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.368 [2024-07-23 10:55:01.860159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.368 [2024-07-23 10:55:01.860186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.368 qpair failed and we were unable to recover it. 
00:34:13.629 [2024-07-23 10:55:01.870059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.629 [2024-07-23 10:55:01.870142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.629 [2024-07-23 10:55:01.870171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.629 [2024-07-23 10:55:01.870190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.629 [2024-07-23 10:55:01.870203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.629 [2024-07-23 10:55:01.870233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.629 qpair failed and we were unable to recover it. 
00:34:13.629 [2024-07-23 10:55:01.880107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.629 [2024-07-23 10:55:01.880230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.629 [2024-07-23 10:55:01.880256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.629 [2024-07-23 10:55:01.880272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.629 [2024-07-23 10:55:01.880283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.629 [2024-07-23 10:55:01.880311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.629 qpair failed and we were unable to recover it. 
00:34:13.629 [2024-07-23 10:55:01.890123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.629 [2024-07-23 10:55:01.890251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.629 [2024-07-23 10:55:01.890276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.629 [2024-07-23 10:55:01.890289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.629 [2024-07-23 10:55:01.890301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.629 [2024-07-23 10:55:01.890329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.629 qpair failed and we were unable to recover it. 
00:34:13.629 [2024-07-23 10:55:01.900188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.629 [2024-07-23 10:55:01.900276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.629 [2024-07-23 10:55:01.900299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.629 [2024-07-23 10:55:01.900313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:01.900324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:01.900352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:01.910219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:01.910317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:01.910342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:01.910356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:01.910368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:01.910396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:01.920253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:01.920339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:01.920367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:01.920381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:01.920393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:01.920420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:01.930262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:01.930394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:01.930421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:01.930437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:01.930449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:01.930477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:01.940245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:01.940325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:01.940360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:01.940373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:01.940384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:01.940423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:01.950276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:01.950355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:01.950378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:01.950391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:01.950403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:01.950430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:01.960346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:01.960509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:01.960533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:01.960547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:01.960570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:01.960605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:01.970475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:01.970576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:01.970603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:01.970616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:01.970628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:01.970655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:01.980362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:01.980449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:01.980471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:01.980492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:01.980505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:01.980532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:01.990398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:01.990518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:01.990540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:01.990554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:01.990566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:01.990593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:02.000492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:02.000621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:02.000657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:02.000670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:02.000682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:02.000721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:02.010465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:02.010562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:02.010590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:02.010617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:02.010628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:02.010657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:02.020475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:02.020559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:02.020581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:02.020594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.630 [2024-07-23 10:55:02.020606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.630 [2024-07-23 10:55:02.020633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.630 qpair failed and we were unable to recover it. 
00:34:13.630 [2024-07-23 10:55:02.030497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.630 [2024-07-23 10:55:02.030577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.630 [2024-07-23 10:55:02.030601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.630 [2024-07-23 10:55:02.030614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.631 [2024-07-23 10:55:02.030626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.631 [2024-07-23 10:55:02.030653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.631 qpair failed and we were unable to recover it. 
00:34:13.631 [2024-07-23 10:55:02.040601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.631 [2024-07-23 10:55:02.040693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.631 [2024-07-23 10:55:02.040717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.631 [2024-07-23 10:55:02.040731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.631 [2024-07-23 10:55:02.040743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb6e8000b90 00:34:13.631 [2024-07-23 10:55:02.040782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.631 qpair failed and we were unable to recover it. 00:34:13.631 [2024-07-23 10:55:02.040820] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:13.631 A controller has encountered a failure and is being reset. 00:34:13.631 Controller properly reset. 00:34:16.916 Initializing NVMe Controllers 00:34:16.916 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:16.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:16.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:16.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:16.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:16.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:16.916 Initialization complete. Launching workers. 
00:34:16.916 Starting thread on core 1 00:34:16.916 Starting thread on core 2 00:34:16.916 Starting thread on core 3 00:34:16.916 Starting thread on core 0 00:34:16.916 10:55:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:16.916 00:34:16.916 real 0m10.692s 00:34:16.916 user 0m26.352s 00:34:16.916 sys 0m6.038s 00:34:16.916 10:55:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:16.916 10:55:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.916 ************************************ 00:34:16.916 END TEST nvmf_target_disconnect_tc2 00:34:16.916 ************************************ 00:34:16.916 10:55:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:16.916 10:55:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:16.916 10:55:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:16.916 10:55:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:16.916 10:55:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:16.916 10:55:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:16.916 10:55:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:16.916 10:55:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:16.916 10:55:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:16.916 rmmod nvme_tcp 00:34:16.916 rmmod nvme_fabrics 00:34:16.916 rmmod nvme_keyring 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:16.916 10:55:05 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3951665 ']' 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3951665 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3951665 ']' 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3951665 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3951665 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3951665' 00:34:16.916 killing process with pid 3951665 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3951665 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3951665 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:16.916 10:55:05 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:16.916 10:55:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.820 10:55:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:18.820 00:34:18.820 real 0m15.044s 00:34:18.820 user 0m51.509s 00:34:18.820 sys 0m8.107s 00:34:18.820 10:55:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:18.820 10:55:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:18.820 ************************************ 00:34:18.820 END TEST nvmf_target_disconnect 00:34:18.820 ************************************ 00:34:18.820 10:55:07 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:18.820 10:55:07 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:18.820 10:55:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:18.820 10:55:07 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:18.820 00:34:18.820 real 27m13.232s 00:34:18.820 user 75m31.717s 00:34:18.820 sys 6m4.231s 00:34:18.820 10:55:07 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:18.820 10:55:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:18.820 ************************************ 00:34:18.820 END TEST nvmf_tcp 00:34:18.820 ************************************ 00:34:19.079 10:55:07 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:19.079 10:55:07 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:19.079 10:55:07 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:19.079 10:55:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:19.079 10:55:07 -- 
common/autotest_common.sh@10 -- # set +x 00:34:19.079 ************************************ 00:34:19.079 START TEST spdkcli_nvmf_tcp 00:34:19.079 ************************************ 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:19.079 * Looking for test storage... 00:34:19.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.079 10:55:07 
spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3952602 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3952602 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3952602 ']' 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:19.079 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:19.079 [2024-07-23 10:55:07.483067] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:34:19.079 [2024-07-23 10:55:07.483143] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952602 ] 00:34:19.079 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.079 [2024-07-23 10:55:07.542924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:19.338 [2024-07-23 10:55:07.631447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.338 [2024-07-23 10:55:07.631460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.338 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:19.338 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:19.338 10:55:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:19.338 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:19.338 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:19.338 10:55:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:19.338 10:55:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:19.338 10:55:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:19.338 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:19.338 10:55:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:19.338 10:55:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 
''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:19.338 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:19.338 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:19.338 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:19.338 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:19.338 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:19.338 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:19.338 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:19.338 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:19.338 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:19.338 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:19.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:19.338 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:19.338 ' 00:34:21.875 [2024-07-23 10:55:10.290660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.251 [2024-07-23 10:55:11.530781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:25.786 [2024-07-23 10:55:13.837852] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:27.694 [2024-07-23 10:55:15.852183] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:29.103 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:29.103 
Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:29.103 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:29.103 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:29.103 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:29.103 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:29.103 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:29.103 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:29.103 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:29.103 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', 
'127.0.0.1:4260', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:29.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:29.103 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:29.103 10:55:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:29.103 10:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:29.103 10:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.103 10:55:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:29.103 10:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:34:29.103 10:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.103 10:55:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:29.103 10:55:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:29.670 10:55:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:29.670 10:55:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:29.670 10:55:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:29.670 10:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:29.670 10:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.670 10:55:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:29.670 10:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:29.670 10:55:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.670 10:55:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:29.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:29.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:29.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:29.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' 
'\''127.0.0.1:4262'\'' 00:34:29.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:29.670 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:29.670 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:29.670 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:29.670 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:29.670 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:29.670 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:29.670 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:29.670 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:29.670 ' 00:34:34.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:34.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:34.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:34.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:34.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:34.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:34.943 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:34.943 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:34.943 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:34.943 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:34.943 Executing command: ['/bdevs/malloc 
delete Malloc4', 'Malloc4', False] 00:34:34.943 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:34.944 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:34.944 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3952602 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3952602 ']' 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3952602 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3952602 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3952602' 00:34:34.944 killing process with pid 3952602 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3952602 00:34:34.944 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3952602 00:34:35.203 10:55:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:35.203 10:55:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:35.203 10:55:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3952602 ']' 00:34:35.203 10:55:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 
3952602 00:34:35.203 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3952602 ']' 00:34:35.203 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3952602 00:34:35.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3952602) - No such process 00:34:35.203 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3952602 is not found' 00:34:35.203 Process with pid 3952602 is not found 00:34:35.203 10:55:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:35.203 10:55:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:35.203 10:55:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:35.203 00:34:35.203 real 0m16.108s 00:34:35.203 user 0m34.286s 00:34:35.203 sys 0m0.818s 00:34:35.203 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:35.203 10:55:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.203 ************************************ 00:34:35.203 END TEST spdkcli_nvmf_tcp 00:34:35.203 ************************************ 00:34:35.203 10:55:23 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:35.203 10:55:23 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:35.203 10:55:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:35.203 10:55:23 -- common/autotest_common.sh@10 -- # set +x 00:34:35.203 ************************************ 00:34:35.203 START TEST nvmf_identify_passthru 00:34:35.203 ************************************ 00:34:35.203 10:55:23 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:35.203 * Looking for test storage... 00:34:35.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:35.203 10:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.203 10:55:23 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.203 10:55:23 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.203 10:55:23 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.203 10:55:23 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.203 10:55:23 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.203 10:55:23 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.203 10:55:23 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:35.203 10:55:23 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:35.203 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:35.203 10:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.203 10:55:23 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.203 10:55:23 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.203 10:55:23 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.203 10:55:23 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.204 10:55:23 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.204 10:55:23 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.204 10:55:23 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:34:35.204 10:55:23 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.204 10:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:35.204 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:35.204 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.204 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:35.204 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:35.204 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:35.204 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.204 10:55:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:35.204 10:55:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.204 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:35.204 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:35.204 10:55:23 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:35.204 10:55:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:37.111 10:55:25 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:37.111 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:34:37.111 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:37.111 10:55:25 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:34:37.111 Found net devices under 0000:08:00.0: cvl_0_0 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.111 10:55:25 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:37.111 Found net devices under 0000:08:00.1: cvl_0_1 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:37.111 10:55:25 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:37.111 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:37.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:37.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:34:37.111 00:34:37.112 --- 10.0.0.2 ping statistics --- 00:34:37.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.112 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:34:37.112 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:37.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:37.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:34:37.112 00:34:37.112 --- 10.0.0.1 ping statistics --- 00:34:37.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.112 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:34:37.112 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:37.112 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:37.112 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:37.112 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:37.112 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:37.112 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:37.112 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:37.112 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:37.112 10:55:25 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:37.112 10:55:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:37.112 10:55:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:37.112 10:55:25 nvmf_identify_passthru -- 
common/autotest_common.sh@1509 -- # bdfs=() 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:34:37.112 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:84:00.0 00:34:37.112 10:55:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:84:00.0 00:34:37.112 10:55:25 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:84:00.0 ']' 00:34:37.112 10:55:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:34:37.112 10:55:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:37.112 10:55:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:37.112 EAL: No free 2048 kB hugepages reported on node 1 00:34:41.305 10:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ8275016S1P0FGN 00:34:41.305 10:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:34:41.305 10:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:34:41.305 10:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:41.305 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.496 10:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:45.496 10:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:45.496 10:55:33 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.496 10:55:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.496 10:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:45.496 10:55:33 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:45.496 10:55:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.496 10:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3956061 00:34:45.496 10:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:45.496 10:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:45.496 10:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3956061 00:34:45.496 10:55:33 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3956061 ']' 00:34:45.496 10:55:33 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.496 10:55:33 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:45.496 10:55:33 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:45.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.496 10:55:33 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:45.496 10:55:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.496 [2024-07-23 10:55:33.802928] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:34:45.496 [2024-07-23 10:55:33.803026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.496 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.496 [2024-07-23 10:55:33.872746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:45.496 [2024-07-23 10:55:33.963023] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.496 [2024-07-23 10:55:33.963086] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:45.496 [2024-07-23 10:55:33.963102] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:45.496 [2024-07-23 10:55:33.963115] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:45.496 [2024-07-23 10:55:33.963127] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:45.496 [2024-07-23 10:55:33.964507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.496 [2024-07-23 10:55:33.964592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:45.496 [2024-07-23 10:55:33.964674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:45.496 [2024-07-23 10:55:33.964704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:45.755 10:55:34 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.755 INFO: Log level set to 20 00:34:45.755 INFO: Requests: 00:34:45.755 { 00:34:45.755 "jsonrpc": "2.0", 00:34:45.755 "method": "nvmf_set_config", 00:34:45.755 "id": 1, 00:34:45.755 "params": { 00:34:45.755 "admin_cmd_passthru": { 00:34:45.755 "identify_ctrlr": true 00:34:45.755 } 00:34:45.755 } 00:34:45.755 } 00:34:45.755 00:34:45.755 INFO: response: 00:34:45.755 { 00:34:45.755 "jsonrpc": "2.0", 00:34:45.755 "id": 1, 00:34:45.755 "result": true 00:34:45.755 } 00:34:45.755 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.755 10:55:34 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.755 INFO: Setting log level to 20 00:34:45.755 INFO: Setting log level to 20 00:34:45.755 INFO: Log level set to 20 00:34:45.755 INFO: Log level set to 20 00:34:45.755 
INFO: Requests: 00:34:45.755 { 00:34:45.755 "jsonrpc": "2.0", 00:34:45.755 "method": "framework_start_init", 00:34:45.755 "id": 1 00:34:45.755 } 00:34:45.755 00:34:45.755 INFO: Requests: 00:34:45.755 { 00:34:45.755 "jsonrpc": "2.0", 00:34:45.755 "method": "framework_start_init", 00:34:45.755 "id": 1 00:34:45.755 } 00:34:45.755 00:34:45.755 [2024-07-23 10:55:34.182572] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:45.755 INFO: response: 00:34:45.755 { 00:34:45.755 "jsonrpc": "2.0", 00:34:45.755 "id": 1, 00:34:45.755 "result": true 00:34:45.755 } 00:34:45.755 00:34:45.755 INFO: response: 00:34:45.755 { 00:34:45.755 "jsonrpc": "2.0", 00:34:45.755 "id": 1, 00:34:45.755 "result": true 00:34:45.755 } 00:34:45.755 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.755 10:55:34 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.755 INFO: Setting log level to 40 00:34:45.755 INFO: Setting log level to 40 00:34:45.755 INFO: Setting log level to 40 00:34:45.755 [2024-07-23 10:55:34.192423] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.755 10:55:34 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.755 10:55:34 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 00:34:45.755 10:55:34 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.755 10:55:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:49.041 Nvme0n1 00:34:49.041 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.041 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:49.041 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.041 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:49.041 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.041 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:49.041 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.041 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:49.041 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.041 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:49.041 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.041 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:49.041 [2024-07-23 10:55:37.064086] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.041 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.041 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:49.041 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.041 10:55:37 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:49.041 [ 00:34:49.041 { 00:34:49.041 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:49.041 "subtype": "Discovery", 00:34:49.041 "listen_addresses": [], 00:34:49.041 "allow_any_host": true, 00:34:49.041 "hosts": [] 00:34:49.041 }, 00:34:49.041 { 00:34:49.041 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:49.041 "subtype": "NVMe", 00:34:49.041 "listen_addresses": [ 00:34:49.041 { 00:34:49.041 "trtype": "TCP", 00:34:49.041 "adrfam": "IPv4", 00:34:49.041 "traddr": "10.0.0.2", 00:34:49.041 "trsvcid": "4420" 00:34:49.041 } 00:34:49.041 ], 00:34:49.041 "allow_any_host": true, 00:34:49.041 "hosts": [], 00:34:49.041 "serial_number": "SPDK00000000000001", 00:34:49.041 "model_number": "SPDK bdev Controller", 00:34:49.041 "max_namespaces": 1, 00:34:49.041 "min_cntlid": 1, 00:34:49.041 "max_cntlid": 65519, 00:34:49.041 "namespaces": [ 00:34:49.041 { 00:34:49.041 "nsid": 1, 00:34:49.041 "bdev_name": "Nvme0n1", 00:34:49.041 "name": "Nvme0n1", 00:34:49.041 "nguid": "C024F8F899D44C4398D6938212F95CF8", 00:34:49.041 "uuid": "c024f8f8-99d4-4c43-98d6-938212f95cf8" 00:34:49.041 } 00:34:49.041 ] 00:34:49.041 } 00:34:49.041 ] 00:34:49.041 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.042 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:49.042 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:49.042 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:49.042 EAL: No free 2048 kB hugepages reported on node 1 00:34:49.042 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ8275016S1P0FGN 00:34:49.042 10:55:37 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:49.042 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:49.042 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:49.042 EAL: No free 2048 kB hugepages reported on node 1 00:34:49.301 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:49.301 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ8275016S1P0FGN '!=' PHLJ8275016S1P0FGN ']' 00:34:49.301 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:49.301 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.301 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:49.301 10:55:37 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:49.301 10:55:37 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:49.301 10:55:37 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:49.301 10:55:37 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:49.301 10:55:37 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:49.301 10:55:37 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:49.301 10:55:37 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:49.301 rmmod 
nvme_tcp 00:34:49.301 rmmod nvme_fabrics 00:34:49.301 rmmod nvme_keyring 00:34:49.301 10:55:37 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:49.301 10:55:37 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:49.301 10:55:37 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:49.301 10:55:37 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3956061 ']' 00:34:49.301 10:55:37 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3956061 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3956061 ']' 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3956061 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3956061 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3956061' 00:34:49.301 killing process with pid 3956061 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3956061 00:34:49.301 10:55:37 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3956061 00:34:51.203 10:55:39 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:51.203 10:55:39 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:51.203 10:55:39 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:51.203 10:55:39 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:34:51.203 10:55:39 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:51.203 10:55:39 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.203 10:55:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:51.203 10:55:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.109 10:55:41 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:53.109 00:34:53.109 real 0m17.767s 00:34:53.109 user 0m27.400s 00:34:53.109 sys 0m2.088s 00:34:53.109 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:53.109 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.109 ************************************ 00:34:53.109 END TEST nvmf_identify_passthru 00:34:53.109 ************************************ 00:34:53.109 10:55:41 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:53.109 10:55:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:53.109 10:55:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:53.109 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:34:53.109 ************************************ 00:34:53.109 START TEST nvmf_dif 00:34:53.109 ************************************ 00:34:53.109 10:55:41 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:53.109 * Looking for test storage... 
00:34:53.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:53.109 10:55:41 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.109 10:55:41 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.109 10:55:41 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.109 10:55:41 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.109 10:55:41 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.109 10:55:41 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.109 10:55:41 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.109 10:55:41 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.109 10:55:41 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:53.109 10:55:41 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:53.110 10:55:41 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:53.110 10:55:41 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:53.110 10:55:41 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:53.110 10:55:41 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:53.110 10:55:41 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.110 10:55:41 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:53.110 10:55:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:53.110 10:55:41 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:53.110 10:55:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:54.482 10:55:42 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:54.483 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 
(0x8086 - 0x159b)' 00:34:54.483 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:34:54.483 Found net devices under 0000:08:00.0: cvl_0_0 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:54.483 Found net devices under 0000:08:00.1: cvl_0_1 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:54.483 10:55:42 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:54.742 10:55:43 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:54.742 10:55:43 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:54.742 10:55:43 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:54.742 10:55:43 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:54.742 10:55:43 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:54.742 10:55:43 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:54.742 10:55:43 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:54.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:54.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:34:54.742 00:34:54.742 --- 10.0.0.2 ping statistics --- 00:34:54.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.742 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:34:54.742 10:55:43 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:54.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:54.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:34:54.742 00:34:54.742 --- 10.0.0.1 ping statistics --- 00:34:54.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.742 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:34:54.742 10:55:43 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:54.742 10:55:43 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:54.742 10:55:43 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:54.742 10:55:43 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:55.692 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:34:55.692 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:55.692 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:34:55.692 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:34:55.692 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:34:55.692 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:34:55.692 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:34:55.692 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:34:55.692 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:34:55.692 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:34:55.692 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:34:55.692 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:34:55.692 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:34:55.692 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:34:55.692 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:34:55.692 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:34:55.692 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:34:55.692 10:55:44 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:55.692 10:55:44 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:55.692 10:55:44 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:55.692 10:55:44 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:55.692 10:55:44 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:55.692 10:55:44 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:55.692 10:55:44 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:55.692 10:55:44 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:55.692 10:55:44 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:55.692 10:55:44 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:55.692 10:55:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:55.692 10:55:44 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3958583 00:34:55.692 10:55:44 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:55.692 10:55:44 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3958583 00:34:55.692 10:55:44 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3958583 ']' 00:34:55.692 10:55:44 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.692 10:55:44 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:55.692 10:55:44 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.692 10:55:44 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:55.692 10:55:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:55.692 [2024-07-23 10:55:44.184721] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:34:55.692 [2024-07-23 10:55:44.184815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.950 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.950 [2024-07-23 10:55:44.249798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.950 [2024-07-23 10:55:44.336135] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.950 [2024-07-23 10:55:44.336199] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:55.950 [2024-07-23 10:55:44.336215] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.950 [2024-07-23 10:55:44.336228] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.950 [2024-07-23 10:55:44.336240] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:55.950 [2024-07-23 10:55:44.336270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.950 10:55:44 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:55.950 10:55:44 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:34:55.950 10:55:44 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:55.950 10:55:44 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:55.950 10:55:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:56.208 10:55:44 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.208 10:55:44 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:56.208 10:55:44 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:56.208 10:55:44 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.208 10:55:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:56.208 [2024-07-23 10:55:44.458394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.208 10:55:44 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.208 10:55:44 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:56.208 10:55:44 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:56.208 10:55:44 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:56.208 10:55:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:56.208 ************************************ 00:34:56.208 START TEST fio_dif_1_default 00:34:56.208 ************************************ 00:34:56.208 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:34:56.208 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:56.208 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:56.208 10:55:44 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:56.208 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:56.208 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:56.208 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:56.208 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.208 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:56.208 bdev_null0 00:34:56.208 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:56.209 [2024-07-23 10:55:44.514655] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:56.209 { 00:34:56.209 "params": { 00:34:56.209 "name": "Nvme$subsystem", 00:34:56.209 "trtype": "$TEST_TRANSPORT", 00:34:56.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.209 "adrfam": "ipv4", 00:34:56.209 "trsvcid": "$NVMF_PORT", 00:34:56.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.209 "hdgst": ${hdgst:-false}, 00:34:56.209 "ddgst": ${ddgst:-false} 00:34:56.209 }, 00:34:56.209 "method": "bdev_nvme_attach_controller" 00:34:56.209 } 00:34:56.209 EOF 00:34:56.209 )") 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:56.209 10:55:44 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:56.209 "params": { 00:34:56.209 "name": "Nvme0", 00:34:56.209 "trtype": "tcp", 00:34:56.209 "traddr": "10.0.0.2", 00:34:56.209 "adrfam": "ipv4", 00:34:56.209 "trsvcid": "4420", 00:34:56.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:56.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:56.209 "hdgst": false, 00:34:56.209 "ddgst": false 00:34:56.209 }, 00:34:56.209 "method": "bdev_nvme_attach_controller" 00:34:56.209 }' 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:56.209 10:55:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.477 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:56.477 fio-3.35 
00:34:56.477 Starting 1 thread 00:34:56.477 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.674 00:35:08.674 filename0: (groupid=0, jobs=1): err= 0: pid=3958759: Tue Jul 23 10:55:55 2024 00:35:08.674 read: IOPS=186, BW=748KiB/s (766kB/s)(7488KiB/10013msec) 00:35:08.674 slat (nsec): min=7363, max=65742, avg=8901.69, stdev=2958.93 00:35:08.674 clat (usec): min=584, max=48080, avg=21366.32, stdev=20619.20 00:35:08.674 lat (usec): min=592, max=48122, avg=21375.22, stdev=20618.95 00:35:08.674 clat percentiles (usec): 00:35:08.674 | 1.00th=[ 619], 5.00th=[ 627], 10.00th=[ 644], 20.00th=[ 652], 00:35:08.674 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[41157], 60.00th=[41157], 00:35:08.674 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:08.674 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:35:08.674 | 99.99th=[47973] 00:35:08.674 bw ( KiB/s): min= 672, max= 768, per=99.89%, avg=747.20, stdev=31.62, samples=20 00:35:08.674 iops : min= 168, max= 192, avg=186.80, stdev= 7.90, samples=20 00:35:08.674 lat (usec) : 750=46.63%, 1000=3.15% 00:35:08.674 lat (msec) : 50=50.21% 00:35:08.674 cpu : usr=90.26%, sys=9.36%, ctx=12, majf=0, minf=182 00:35:08.674 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.674 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.674 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:08.674 00:35:08.674 Run status group 0 (all jobs): 00:35:08.674 READ: bw=748KiB/s (766kB/s), 748KiB/s-748KiB/s (766kB/s-766kB/s), io=7488KiB (7668kB), run=10013-10013msec 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:08.674 10:55:55 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.674 00:35:08.674 real 0m10.954s 00:35:08.674 user 0m9.836s 00:35:08.674 sys 0m1.174s 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.674 ************************************ 00:35:08.674 END TEST fio_dif_1_default 00:35:08.674 ************************************ 00:35:08.674 10:55:55 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:08.674 10:55:55 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:08.674 10:55:55 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:08.674 10:55:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.674 ************************************ 00:35:08.674 START TEST fio_dif_1_multi_subsystems 00:35:08.674 
************************************ 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.674 bdev_null0 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.674 [2024-07-23 10:55:55.516715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.674 bdev_null1 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.674 
10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.674 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:08.675 10:55:55 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:08.675 { 00:35:08.675 "params": { 00:35:08.675 "name": "Nvme$subsystem", 00:35:08.675 "trtype": "$TEST_TRANSPORT", 00:35:08.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.675 "adrfam": "ipv4", 00:35:08.675 "trsvcid": "$NVMF_PORT", 00:35:08.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.675 "hdgst": ${hdgst:-false}, 00:35:08.675 "ddgst": ${ddgst:-false} 00:35:08.675 }, 00:35:08.675 "method": "bdev_nvme_attach_controller" 00:35:08.675 } 00:35:08.675 EOF 00:35:08.675 )") 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:08.675 { 00:35:08.675 "params": { 00:35:08.675 "name": "Nvme$subsystem", 00:35:08.675 "trtype": "$TEST_TRANSPORT", 00:35:08.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.675 "adrfam": "ipv4", 00:35:08.675 "trsvcid": "$NVMF_PORT", 00:35:08.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.675 "hdgst": ${hdgst:-false}, 00:35:08.675 "ddgst": ${ddgst:-false} 00:35:08.675 }, 00:35:08.675 "method": "bdev_nvme_attach_controller" 00:35:08.675 } 00:35:08.675 EOF 00:35:08.675 )") 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:08.675 "params": { 00:35:08.675 "name": "Nvme0", 00:35:08.675 "trtype": "tcp", 00:35:08.675 "traddr": "10.0.0.2", 00:35:08.675 "adrfam": "ipv4", 00:35:08.675 "trsvcid": "4420", 00:35:08.675 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:08.675 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:08.675 "hdgst": false, 00:35:08.675 "ddgst": false 00:35:08.675 }, 00:35:08.675 "method": "bdev_nvme_attach_controller" 00:35:08.675 },{ 00:35:08.675 "params": { 00:35:08.675 "name": "Nvme1", 00:35:08.675 "trtype": "tcp", 00:35:08.675 "traddr": "10.0.0.2", 00:35:08.675 "adrfam": "ipv4", 00:35:08.675 "trsvcid": "4420", 00:35:08.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:08.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:08.675 "hdgst": false, 00:35:08.675 "ddgst": false 00:35:08.675 }, 00:35:08.675 "method": "bdev_nvme_attach_controller" 00:35:08.675 }' 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:08.675 10:55:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.675 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:08.675 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:08.675 fio-3.35 00:35:08.675 Starting 2 threads 00:35:08.675 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.637 00:35:18.637 filename0: (groupid=0, jobs=1): err= 0: pid=3959828: Tue Jul 23 10:56:06 2024 00:35:18.637 read: IOPS=131, BW=525KiB/s (538kB/s)(5264KiB/10019msec) 00:35:18.637 slat (nsec): min=7406, max=52922, avg=9193.74, stdev=2814.50 00:35:18.637 clat (usec): min=629, max=43315, avg=30422.42, stdev=17910.12 00:35:18.637 lat (usec): min=636, max=43326, avg=30431.61, stdev=17910.33 00:35:18.637 clat percentiles (usec): 00:35:18.637 | 1.00th=[ 660], 5.00th=[ 709], 10.00th=[ 775], 20.00th=[ 873], 00:35:18.637 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:18.637 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:18.637 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:35:18.637 | 99.99th=[43254] 00:35:18.637 bw ( KiB/s): min= 384, max= 768, per=49.94%, avg=524.80, stdev=140.98, samples=20 00:35:18.637 iops : min= 96, max= 192, avg=131.20, stdev=35.25, samples=20 00:35:18.637 lat (usec) : 750=8.36%, 1000=17.78% 00:35:18.637 lat (msec) : 2=0.61%, 50=73.25% 00:35:18.637 cpu : usr=94.04%, sys=5.59%, ctx=14, majf=0, minf=117 00:35:18.637 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:18.637 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.637 issued rwts: total=1316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.637 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:18.637 filename1: (groupid=0, jobs=1): err= 0: pid=3959829: Tue Jul 23 10:56:06 2024 00:35:18.637 read: IOPS=130, BW=524KiB/s (536kB/s)(5248KiB/10018msec) 00:35:18.637 slat (nsec): min=7380, max=71850, avg=9245.51, stdev=3545.30 00:35:18.637 clat (usec): min=598, max=42991, avg=30511.99, stdev=17940.02 00:35:18.637 lat (usec): min=605, max=43004, avg=30521.24, stdev=17940.23 00:35:18.637 clat percentiles (usec): 00:35:18.637 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 668], 20.00th=[ 685], 00:35:18.637 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:18.637 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:18.637 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:18.637 | 99.99th=[42730] 00:35:18.637 bw ( KiB/s): min= 384, max= 704, per=49.85%, avg=523.20, stdev=134.68, samples=20 00:35:18.637 iops : min= 96, max= 176, avg=130.80, stdev=33.67, samples=20 00:35:18.637 lat (usec) : 750=26.22%, 1000=0.30% 00:35:18.637 lat (msec) : 50=73.48% 00:35:18.637 cpu : usr=93.76%, sys=5.86%, ctx=14, majf=0, minf=166 00:35:18.637 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:18.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.637 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.637 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:18.637 00:35:18.637 Run status group 0 (all jobs): 00:35:18.637 READ: bw=1049KiB/s (1074kB/s), 524KiB/s-525KiB/s (536kB/s-538kB/s), io=10.3MiB (10.8MB), run=10018-10019msec 00:35:18.637 
10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.637 00:35:18.637 real 0m11.350s 00:35:18.637 user 0m19.932s 00:35:18.637 sys 0m1.444s 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:18.637 10:56:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:18.637 ************************************ 00:35:18.637 END TEST fio_dif_1_multi_subsystems 00:35:18.637 ************************************ 00:35:18.637 10:56:06 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:18.637 10:56:06 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:18.637 10:56:06 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:18.637 10:56:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:18.637 ************************************ 00:35:18.637 START TEST fio_dif_rand_params 00:35:18.637 ************************************ 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:18.637 10:56:06 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.637 bdev_null0 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:18.637 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.637 
10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.638 [2024-07-23 10:56:06.910857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:18.638 { 00:35:18.638 "params": { 00:35:18.638 "name": "Nvme$subsystem", 00:35:18.638 "trtype": "$TEST_TRANSPORT", 00:35:18.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:18.638 "adrfam": "ipv4", 00:35:18.638 "trsvcid": "$NVMF_PORT", 00:35:18.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:18.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:18.638 
"hdgst": ${hdgst:-false}, 00:35:18.638 "ddgst": ${ddgst:-false} 00:35:18.638 }, 00:35:18.638 "method": "bdev_nvme_attach_controller" 00:35:18.638 } 00:35:18.638 EOF 00:35:18.638 )") 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:18.638 10:56:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:18.638 "params": { 00:35:18.638 "name": "Nvme0", 00:35:18.638 "trtype": "tcp", 00:35:18.638 "traddr": "10.0.0.2", 00:35:18.638 "adrfam": "ipv4", 00:35:18.638 "trsvcid": "4420", 00:35:18.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.638 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:18.638 "hdgst": false, 00:35:18.638 "ddgst": false 00:35:18.638 }, 00:35:18.638 "method": "bdev_nvme_attach_controller" 00:35:18.638 }' 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:18.638 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:18.638 10:56:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.898 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:18.898 ... 00:35:18.898 fio-3.35 00:35:18.898 Starting 3 threads 00:35:18.898 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.459 00:35:25.459 filename0: (groupid=0, jobs=1): err= 0: pid=3960890: Tue Jul 23 10:56:12 2024 00:35:25.459 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(115MiB/5043msec) 00:35:25.459 slat (nsec): min=6544, max=33098, avg=13895.70, stdev=3149.02 00:35:25.459 clat (usec): min=4441, max=94552, avg=16452.11, stdev=13570.78 00:35:25.459 lat (usec): min=4452, max=94569, avg=16466.01, stdev=13570.73 00:35:25.459 clat percentiles (usec): 00:35:25.459 | 1.00th=[ 5014], 5.00th=[ 5932], 10.00th=[ 8225], 20.00th=[ 9634], 00:35:25.459 | 30.00th=[10159], 40.00th=[11469], 50.00th=[13042], 60.00th=[14091], 00:35:25.459 | 70.00th=[14746], 80.00th=[15664], 90.00th=[46400], 95.00th=[53740], 00:35:25.459 | 99.00th=[56886], 99.50th=[88605], 99.90th=[94897], 99.95th=[94897], 00:35:25.459 | 99.99th=[94897] 00:35:25.459 bw ( KiB/s): min=14848, max=32512, per=33.00%, avg=23398.40, stdev=5465.60, samples=10 00:35:25.459 iops : min= 116, max= 254, avg=182.80, stdev=42.70, samples=10 00:35:25.459 lat (msec) : 10=27.84%, 20=61.24%, 50=3.82%, 100=7.10% 00:35:25.459 cpu : usr=95.02%, sys=4.58%, ctx=13, majf=0, minf=90 00:35:25.459 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.459 issued rwts: total=916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.459 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:25.459 filename0: (groupid=0, jobs=1): err= 0: pid=3960891: Tue Jul 23 
10:56:12 2024 00:35:25.459 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(116MiB/5035msec) 00:35:25.459 slat (nsec): min=6341, max=88940, avg=13651.97, stdev=4032.51 00:35:25.459 clat (usec): min=4745, max=57583, avg=16224.40, stdev=12467.08 00:35:25.459 lat (usec): min=4755, max=57597, avg=16238.05, stdev=12466.64 00:35:25.459 clat percentiles (usec): 00:35:25.459 | 1.00th=[ 5735], 5.00th=[ 7177], 10.00th=[ 8848], 20.00th=[ 9503], 00:35:25.459 | 30.00th=[10159], 40.00th=[11600], 50.00th=[12780], 60.00th=[13698], 00:35:25.459 | 70.00th=[14353], 80.00th=[15533], 90.00th=[46924], 95.00th=[50594], 00:35:25.459 | 99.00th=[55837], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:35:25.459 | 99.99th=[57410] 00:35:25.459 bw ( KiB/s): min=19200, max=31232, per=33.48%, avg=23735.70, stdev=3931.33, samples=10 00:35:25.459 iops : min= 150, max= 244, avg=185.40, stdev=30.73, samples=10 00:35:25.459 lat (msec) : 10=27.53%, 20=61.40%, 50=5.70%, 100=5.38% 00:35:25.459 cpu : usr=94.72%, sys=4.89%, ctx=7, majf=0, minf=177 00:35:25.459 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.459 issued rwts: total=930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.459 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:25.459 filename0: (groupid=0, jobs=1): err= 0: pid=3960892: Tue Jul 23 10:56:12 2024 00:35:25.459 read: IOPS=187, BW=23.5MiB/s (24.6MB/s)(119MiB/5044msec) 00:35:25.459 slat (nsec): min=6763, max=74354, avg=19861.37, stdev=7299.53 00:35:25.459 clat (usec): min=5052, max=92194, avg=15891.96, stdev=13586.86 00:35:25.459 lat (usec): min=5064, max=92217, avg=15911.82, stdev=13587.16 00:35:25.459 clat percentiles (usec): 00:35:25.459 | 1.00th=[ 5473], 5.00th=[ 6783], 10.00th=[ 8455], 20.00th=[ 9372], 00:35:25.459 | 30.00th=[ 9765], 40.00th=[11076], 50.00th=[12256], 
60.00th=[13042], 00:35:25.459 | 70.00th=[13566], 80.00th=[14091], 90.00th=[46924], 95.00th=[51643], 00:35:25.459 | 99.00th=[54264], 99.50th=[86508], 99.90th=[91751], 99.95th=[91751], 00:35:25.459 | 99.99th=[91751] 00:35:25.459 bw ( KiB/s): min=15360, max=33024, per=34.12%, avg=24192.00, stdev=6109.83, samples=10 00:35:25.459 iops : min= 120, max= 258, avg=189.00, stdev=47.73, samples=10 00:35:25.459 lat (msec) : 10=31.75%, 20=57.28%, 50=3.90%, 100=7.07% 00:35:25.459 cpu : usr=90.07%, sys=6.86%, ctx=153, majf=0, minf=84 00:35:25.459 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.459 issued rwts: total=948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.459 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:25.459 00:35:25.459 Run status group 0 (all jobs): 00:35:25.459 READ: bw=69.2MiB/s (72.6MB/s), 22.7MiB/s-23.5MiB/s (23.8MB/s-24.6MB/s), io=349MiB (366MB), run=5035-5044msec 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:25.459 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.459 bdev_null0 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.459 [2024-07-23 10:56:13.027239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:35:25.459 bdev_null1 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.459 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:25.460 
10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.460 bdev_null2 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:25.460 { 00:35:25.460 "params": { 00:35:25.460 "name": "Nvme$subsystem", 00:35:25.460 "trtype": "$TEST_TRANSPORT", 00:35:25.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.460 "adrfam": "ipv4", 00:35:25.460 "trsvcid": "$NVMF_PORT", 00:35:25.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.460 "hdgst": ${hdgst:-false}, 00:35:25.460 "ddgst": ${ddgst:-false} 00:35:25.460 }, 00:35:25.460 "method": "bdev_nvme_attach_controller" 00:35:25.460 } 00:35:25.460 EOF 00:35:25.460 )") 00:35:25.460 10:56:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:25.460 { 00:35:25.460 "params": { 00:35:25.460 "name": "Nvme$subsystem", 00:35:25.460 "trtype": "$TEST_TRANSPORT", 00:35:25.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.460 "adrfam": "ipv4", 00:35:25.460 "trsvcid": "$NVMF_PORT", 00:35:25.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.460 "hdgst": ${hdgst:-false}, 00:35:25.460 "ddgst": ${ddgst:-false} 00:35:25.460 }, 00:35:25.460 "method": "bdev_nvme_attach_controller" 00:35:25.460 } 00:35:25.460 EOF 00:35:25.460 )") 00:35:25.460 10:56:13 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:25.460 { 00:35:25.460 "params": { 00:35:25.460 "name": "Nvme$subsystem", 00:35:25.460 "trtype": "$TEST_TRANSPORT", 00:35:25.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.460 "adrfam": "ipv4", 00:35:25.460 "trsvcid": "$NVMF_PORT", 00:35:25.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.460 "hdgst": ${hdgst:-false}, 00:35:25.460 "ddgst": ${ddgst:-false} 00:35:25.460 }, 00:35:25.460 "method": "bdev_nvme_attach_controller" 00:35:25.460 } 00:35:25.460 EOF 00:35:25.460 )") 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:25.460 "params": { 00:35:25.460 "name": "Nvme0", 00:35:25.460 "trtype": "tcp", 00:35:25.460 "traddr": "10.0.0.2", 00:35:25.460 "adrfam": "ipv4", 00:35:25.460 "trsvcid": "4420", 00:35:25.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:25.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:25.460 "hdgst": false, 00:35:25.460 "ddgst": false 00:35:25.460 }, 00:35:25.460 "method": "bdev_nvme_attach_controller" 00:35:25.460 },{ 00:35:25.460 "params": { 00:35:25.460 "name": "Nvme1", 00:35:25.460 "trtype": "tcp", 00:35:25.460 "traddr": "10.0.0.2", 00:35:25.460 "adrfam": "ipv4", 00:35:25.460 "trsvcid": "4420", 00:35:25.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:25.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:25.460 "hdgst": false, 00:35:25.460 "ddgst": false 00:35:25.460 }, 00:35:25.460 "method": "bdev_nvme_attach_controller" 00:35:25.460 },{ 00:35:25.460 "params": { 00:35:25.460 "name": "Nvme2", 00:35:25.460 "trtype": "tcp", 00:35:25.460 "traddr": "10.0.0.2", 00:35:25.460 "adrfam": "ipv4", 00:35:25.460 "trsvcid": "4420", 00:35:25.460 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:25.460 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:25.460 "hdgst": false, 00:35:25.460 "ddgst": false 00:35:25.460 }, 00:35:25.460 "method": "bdev_nvme_attach_controller" 00:35:25.460 }' 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.460 10:56:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:25.460 10:56:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.460 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:25.460 ... 00:35:25.460 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:25.460 ... 00:35:25.460 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:25.460 ... 
00:35:25.461 fio-3.35 00:35:25.461 Starting 24 threads 00:35:25.461 EAL: No free 2048 kB hugepages reported on node 1 00:35:37.746 00:35:37.747 filename0: (groupid=0, jobs=1): err= 0: pid=3961543: Tue Jul 23 10:56:24 2024 00:35:37.747 read: IOPS=44, BW=177KiB/s (181kB/s)(1792KiB/10120msec) 00:35:37.747 slat (usec): min=5, max=130, avg=31.90, stdev=26.24 00:35:37.747 clat (msec): min=226, max=581, avg=358.85, stdev=55.61 00:35:37.747 lat (msec): min=226, max=581, avg=358.88, stdev=55.61 00:35:37.747 clat percentiles (msec): 00:35:37.747 | 1.00th=[ 228], 5.00th=[ 251], 10.00th=[ 313], 20.00th=[ 342], 00:35:37.747 | 30.00th=[ 342], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 368], 00:35:37.747 | 70.00th=[ 372], 80.00th=[ 372], 90.00th=[ 388], 95.00th=[ 405], 00:35:37.747 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:35:37.747 | 99.99th=[ 584] 00:35:37.747 bw ( KiB/s): min= 128, max= 256, per=3.92%, avg=181.89, stdev=60.15, samples=19 00:35:37.747 iops : min= 32, max= 64, avg=45.47, stdev=15.04, samples=19 00:35:37.747 lat (msec) : 250=3.57%, 500=92.86%, 750=3.57% 00:35:37.747 cpu : usr=98.45%, sys=1.13%, ctx=16, majf=0, minf=49 00:35:37.747 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:37.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.747 filename0: (groupid=0, jobs=1): err= 0: pid=3961544: Tue Jul 23 10:56:24 2024 00:35:37.747 read: IOPS=64, BW=258KiB/s (264kB/s)(2624KiB/10169msec) 00:35:37.747 slat (usec): min=9, max=134, avg=26.30, stdev=30.84 00:35:37.747 clat (msec): min=99, max=417, avg=246.16, stdev=50.14 00:35:37.747 lat (msec): min=99, max=417, avg=246.18, stdev=50.15 00:35:37.747 clat percentiles (msec): 00:35:37.747 | 1.00th=[ 100], 
5.00th=[ 178], 10.00th=[ 201], 20.00th=[ 215], 00:35:37.747 | 30.00th=[ 236], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:35:37.747 | 70.00th=[ 264], 80.00th=[ 266], 90.00th=[ 313], 95.00th=[ 338], 00:35:37.747 | 99.00th=[ 359], 99.50th=[ 409], 99.90th=[ 418], 99.95th=[ 418], 00:35:37.747 | 99.99th=[ 418] 00:35:37.747 bw ( KiB/s): min= 128, max= 384, per=5.54%, avg=256.00, stdev=57.10, samples=20 00:35:37.747 iops : min= 32, max= 96, avg=64.00, stdev=14.28, samples=20 00:35:37.747 lat (msec) : 100=2.44%, 250=50.91%, 500=46.65% 00:35:37.747 cpu : usr=97.93%, sys=1.40%, ctx=69, majf=0, minf=114 00:35:37.747 IO depths : 1=0.9%, 2=7.2%, 4=25.0%, 8=55.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:35:37.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.747 filename0: (groupid=0, jobs=1): err= 0: pid=3961545: Tue Jul 23 10:56:24 2024 00:35:37.747 read: IOPS=62, BW=249KiB/s (255kB/s)(2536KiB/10167msec) 00:35:37.747 slat (usec): min=5, max=121, avg=20.52, stdev=20.85 00:35:37.747 clat (msec): min=106, max=397, avg=255.30, stdev=54.96 00:35:37.747 lat (msec): min=106, max=397, avg=255.32, stdev=54.97 00:35:37.747 clat percentiles (msec): 00:35:37.747 | 1.00th=[ 107], 5.00th=[ 121], 10.00th=[ 197], 20.00th=[ 228], 00:35:37.747 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:35:37.747 | 70.00th=[ 268], 80.00th=[ 309], 90.00th=[ 326], 95.00th=[ 347], 00:35:37.747 | 99.00th=[ 376], 99.50th=[ 393], 99.90th=[ 397], 99.95th=[ 397], 00:35:37.747 | 99.99th=[ 397] 00:35:37.747 bw ( KiB/s): min= 128, max= 304, per=5.35%, avg=247.20, stdev=32.62, samples=20 00:35:37.747 iops : min= 32, max= 76, avg=61.80, stdev= 8.15, samples=20 00:35:37.747 lat (msec) : 250=54.26%, 500=45.74% 00:35:37.747 cpu : usr=98.22%, 
sys=1.20%, ctx=33, majf=0, minf=44 00:35:37.747 IO depths : 1=1.1%, 2=2.5%, 4=10.1%, 8=74.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:35:37.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 complete : 0=0.0%, 4=89.7%, 8=5.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 issued rwts: total=634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.747 filename0: (groupid=0, jobs=1): err= 0: pid=3961546: Tue Jul 23 10:56:24 2024 00:35:37.747 read: IOPS=51, BW=207KiB/s (212kB/s)(2096KiB/10147msec) 00:35:37.747 slat (usec): min=8, max=153, avg=40.75, stdev=32.94 00:35:37.747 clat (msec): min=179, max=503, avg=307.41, stdev=58.41 00:35:37.747 lat (msec): min=179, max=503, avg=307.45, stdev=58.43 00:35:37.747 clat percentiles (msec): 00:35:37.747 | 1.00th=[ 201], 5.00th=[ 224], 10.00th=[ 230], 20.00th=[ 251], 00:35:37.747 | 30.00th=[ 266], 40.00th=[ 271], 50.00th=[ 317], 60.00th=[ 338], 00:35:37.747 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 376], 95.00th=[ 384], 00:35:37.747 | 99.00th=[ 489], 99.50th=[ 489], 99.90th=[ 506], 99.95th=[ 506], 00:35:37.747 | 99.99th=[ 506] 00:35:37.747 bw ( KiB/s): min= 128, max= 256, per=4.40%, avg=203.20, stdev=58.75, samples=20 00:35:37.747 iops : min= 32, max= 64, avg=50.80, stdev=14.69, samples=20 00:35:37.747 lat (msec) : 250=20.23%, 500=79.39%, 750=0.38% 00:35:37.747 cpu : usr=98.11%, sys=1.20%, ctx=97, majf=0, minf=47 00:35:37.747 IO depths : 1=2.7%, 2=7.6%, 4=21.0%, 8=58.8%, 16=9.9%, 32=0.0%, >=64=0.0% 00:35:37.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 complete : 0=0.0%, 4=92.9%, 8=1.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 issued rwts: total=524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.747 filename0: (groupid=0, jobs=1): err= 0: pid=3961547: Tue Jul 23 10:56:24 2024 00:35:37.747 read: IOPS=44, 
BW=178KiB/s (182kB/s)(1792KiB/10073msec) 00:35:37.747 slat (usec): min=14, max=158, avg=98.12, stdev=20.59 00:35:37.747 clat (msec): min=312, max=405, avg=358.85, stdev=19.51 00:35:37.747 lat (msec): min=312, max=405, avg=358.95, stdev=19.51 00:35:37.747 clat percentiles (msec): 00:35:37.747 | 1.00th=[ 313], 5.00th=[ 321], 10.00th=[ 326], 20.00th=[ 342], 00:35:37.747 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 359], 60.00th=[ 368], 00:35:37.747 | 70.00th=[ 368], 80.00th=[ 372], 90.00th=[ 384], 95.00th=[ 384], 00:35:37.747 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:35:37.747 | 99.99th=[ 405] 00:35:37.747 bw ( KiB/s): min= 128, max= 256, per=3.72%, avg=172.80, stdev=62.64, samples=20 00:35:37.747 iops : min= 32, max= 64, avg=43.20, stdev=15.66, samples=20 00:35:37.747 lat (msec) : 500=100.00% 00:35:37.747 cpu : usr=97.72%, sys=1.43%, ctx=161, majf=0, minf=41 00:35:37.747 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:37.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.747 filename0: (groupid=0, jobs=1): err= 0: pid=3961548: Tue Jul 23 10:56:24 2024 00:35:37.747 read: IOPS=47, BW=190KiB/s (195kB/s)(1920KiB/10090msec) 00:35:37.747 slat (usec): min=6, max=157, avg=74.76, stdev=33.94 00:35:37.747 clat (msec): min=103, max=475, avg=335.66, stdev=66.03 00:35:37.747 lat (msec): min=103, max=475, avg=335.74, stdev=66.03 00:35:37.747 clat percentiles (msec): 00:35:37.747 | 1.00th=[ 105], 5.00th=[ 120], 10.00th=[ 266], 20.00th=[ 321], 00:35:37.747 | 30.00th=[ 342], 40.00th=[ 342], 50.00th=[ 355], 60.00th=[ 368], 00:35:37.747 | 70.00th=[ 372], 80.00th=[ 372], 90.00th=[ 376], 95.00th=[ 380], 00:35:37.747 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 477], 
99.95th=[ 477], 00:35:37.747 | 99.99th=[ 477] 00:35:37.747 bw ( KiB/s): min= 128, max= 256, per=4.01%, avg=185.60, stdev=63.87, samples=20 00:35:37.747 iops : min= 32, max= 64, avg=46.40, stdev=15.97, samples=20 00:35:37.747 lat (msec) : 250=7.08%, 500=92.92% 00:35:37.747 cpu : usr=98.38%, sys=1.15%, ctx=25, majf=0, minf=31 00:35:37.747 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:35:37.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.747 filename0: (groupid=0, jobs=1): err= 0: pid=3961549: Tue Jul 23 10:56:24 2024 00:35:37.747 read: IOPS=44, BW=177KiB/s (181kB/s)(1792KiB/10146msec) 00:35:37.747 slat (usec): min=5, max=167, avg=53.84, stdev=37.00 00:35:37.747 clat (msec): min=210, max=600, avg=361.70, stdev=63.39 00:35:37.747 lat (msec): min=210, max=600, avg=361.75, stdev=63.39 00:35:37.747 clat percentiles (msec): 00:35:37.747 | 1.00th=[ 224], 5.00th=[ 243], 10.00th=[ 326], 20.00th=[ 338], 00:35:37.747 | 30.00th=[ 342], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 368], 00:35:37.747 | 70.00th=[ 372], 80.00th=[ 376], 90.00th=[ 388], 95.00th=[ 498], 00:35:37.747 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:35:37.747 | 99.99th=[ 600] 00:35:37.747 bw ( KiB/s): min= 128, max= 256, per=3.92%, avg=181.89, stdev=60.15, samples=19 00:35:37.747 iops : min= 32, max= 64, avg=45.47, stdev=15.04, samples=19 00:35:37.747 lat (msec) : 250=5.36%, 500=89.73%, 750=4.91% 00:35:37.747 cpu : usr=98.63%, sys=0.96%, ctx=18, majf=0, minf=34 00:35:37.747 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:37.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.747 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.747 filename0: (groupid=0, jobs=1): err= 0: pid=3961550: Tue Jul 23 10:56:24 2024 00:35:37.747 read: IOPS=44, BW=177KiB/s (181kB/s)(1792KiB/10147msec) 00:35:37.748 slat (usec): min=18, max=140, avg=94.06, stdev=21.59 00:35:37.748 clat (msec): min=231, max=499, avg=359.23, stdev=26.25 00:35:37.748 lat (msec): min=231, max=499, avg=359.33, stdev=26.25 00:35:37.748 clat percentiles (msec): 00:35:37.748 | 1.00th=[ 313], 5.00th=[ 321], 10.00th=[ 326], 20.00th=[ 342], 00:35:37.748 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 359], 60.00th=[ 368], 00:35:37.748 | 70.00th=[ 368], 80.00th=[ 372], 90.00th=[ 384], 95.00th=[ 388], 00:35:37.748 | 99.00th=[ 460], 99.50th=[ 481], 99.90th=[ 498], 99.95th=[ 498], 00:35:37.748 | 99.99th=[ 498] 00:35:37.748 bw ( KiB/s): min= 128, max= 256, per=3.72%, avg=172.80, stdev=57.95, samples=20 00:35:37.748 iops : min= 32, max= 64, avg=43.20, stdev=14.49, samples=20 00:35:37.748 lat (msec) : 250=0.45%, 500=99.55% 00:35:37.748 cpu : usr=97.99%, sys=1.41%, ctx=20, majf=0, minf=48 00:35:37.748 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:37.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.748 filename1: (groupid=0, jobs=1): err= 0: pid=3961551: Tue Jul 23 10:56:24 2024 00:35:37.748 read: IOPS=50, BW=201KiB/s (206kB/s)(2048KiB/10175msec) 00:35:37.748 slat (nsec): min=5737, max=84702, avg=27964.12, stdev=11744.29 00:35:37.748 clat (msec): min=105, max=483, avg=317.70, stdev=73.50 00:35:37.748 lat (msec): min=105, max=483, avg=317.73, stdev=73.50 00:35:37.748 clat percentiles 
(msec): 00:35:37.748 | 1.00th=[ 106], 5.00th=[ 121], 10.00th=[ 247], 20.00th=[ 259], 00:35:37.748 | 30.00th=[ 313], 40.00th=[ 338], 50.00th=[ 342], 60.00th=[ 359], 00:35:37.748 | 70.00th=[ 368], 80.00th=[ 372], 90.00th=[ 376], 95.00th=[ 380], 00:35:37.748 | 99.00th=[ 435], 99.50th=[ 439], 99.90th=[ 485], 99.95th=[ 485], 00:35:37.748 | 99.99th=[ 485] 00:35:37.748 bw ( KiB/s): min= 128, max= 256, per=4.29%, avg=198.40, stdev=65.33, samples=20 00:35:37.748 iops : min= 32, max= 64, avg=49.60, stdev=16.33, samples=20 00:35:37.748 lat (msec) : 250=18.36%, 500=81.64% 00:35:37.748 cpu : usr=97.78%, sys=1.36%, ctx=59, majf=0, minf=73 00:35:37.748 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:35:37.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.748 filename1: (groupid=0, jobs=1): err= 0: pid=3961552: Tue Jul 23 10:56:24 2024 00:35:37.748 read: IOPS=44, BW=178KiB/s (182kB/s)(1792KiB/10072msec) 00:35:37.748 slat (usec): min=15, max=1101, avg=93.45, stdev=54.91 00:35:37.748 clat (msec): min=215, max=526, avg=358.95, stdev=32.56 00:35:37.748 lat (msec): min=215, max=526, avg=359.04, stdev=32.56 00:35:37.748 clat percentiles (msec): 00:35:37.748 | 1.00th=[ 249], 5.00th=[ 313], 10.00th=[ 326], 20.00th=[ 342], 00:35:37.748 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 359], 60.00th=[ 368], 00:35:37.748 | 70.00th=[ 368], 80.00th=[ 372], 90.00th=[ 380], 95.00th=[ 405], 00:35:37.748 | 99.00th=[ 477], 99.50th=[ 514], 99.90th=[ 527], 99.95th=[ 527], 00:35:37.748 | 99.99th=[ 527] 00:35:37.748 bw ( KiB/s): min= 112, max= 256, per=3.72%, avg=172.80, stdev=61.33, samples=20 00:35:37.748 iops : min= 28, max= 64, avg=43.20, stdev=15.33, samples=20 00:35:37.748 lat (msec) : 250=1.34%, 500=97.77%, 
750=0.89% 00:35:37.748 cpu : usr=98.49%, sys=0.97%, ctx=41, majf=0, minf=45 00:35:37.748 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:35:37.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.748 filename1: (groupid=0, jobs=1): err= 0: pid=3961553: Tue Jul 23 10:56:24 2024 00:35:37.748 read: IOPS=61, BW=245KiB/s (251kB/s)(2488KiB/10169msec) 00:35:37.748 slat (usec): min=9, max=157, avg=45.30, stdev=34.83 00:35:37.748 clat (msec): min=102, max=436, avg=260.86, stdev=58.08 00:35:37.748 lat (msec): min=102, max=436, avg=260.91, stdev=58.10 00:35:37.748 clat percentiles (msec): 00:35:37.748 | 1.00th=[ 108], 5.00th=[ 125], 10.00th=[ 203], 20.00th=[ 226], 00:35:37.748 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 257], 60.00th=[ 264], 00:35:37.748 | 70.00th=[ 271], 80.00th=[ 321], 90.00th=[ 342], 95.00th=[ 355], 00:35:37.748 | 99.00th=[ 359], 99.50th=[ 401], 99.90th=[ 439], 99.95th=[ 439], 00:35:37.748 | 99.99th=[ 439] 00:35:37.748 bw ( KiB/s): min= 128, max= 368, per=5.24%, avg=242.40, stdev=64.69, samples=20 00:35:37.748 iops : min= 32, max= 92, avg=60.60, stdev=16.17, samples=20 00:35:37.748 lat (msec) : 250=35.37%, 500=64.63% 00:35:37.748 cpu : usr=97.55%, sys=1.64%, ctx=126, majf=0, minf=62 00:35:37.748 IO depths : 1=1.9%, 2=8.2%, 4=25.1%, 8=54.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:35:37.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.748 filename1: (groupid=0, jobs=1): err= 0: pid=3961554: Tue Jul 23 10:56:24 2024 
00:35:37.748 read: IOPS=45, BW=183KiB/s (188kB/s)(1856KiB/10126msec) 00:35:37.748 slat (usec): min=10, max=120, avg=30.64, stdev=17.81 00:35:37.748 clat (msec): min=239, max=505, avg=346.65, stdev=49.96 00:35:37.748 lat (msec): min=239, max=505, avg=346.68, stdev=49.96 00:35:37.748 clat percentiles (msec): 00:35:37.748 | 1.00th=[ 251], 5.00th=[ 266], 10.00th=[ 268], 20.00th=[ 313], 00:35:37.748 | 30.00th=[ 342], 40.00th=[ 342], 50.00th=[ 355], 60.00th=[ 363], 00:35:37.748 | 70.00th=[ 368], 80.00th=[ 372], 90.00th=[ 372], 95.00th=[ 384], 00:35:37.748 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 506], 99.95th=[ 506], 00:35:37.748 | 99.99th=[ 506] 00:35:37.748 bw ( KiB/s): min= 128, max= 256, per=4.07%, avg=188.63, stdev=60.94, samples=19 00:35:37.748 iops : min= 32, max= 64, avg=47.16, stdev=15.24, samples=19 00:35:37.748 lat (msec) : 250=0.86%, 500=95.26%, 750=3.88% 00:35:37.748 cpu : usr=98.31%, sys=1.20%, ctx=37, majf=0, minf=41 00:35:37.748 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:35:37.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.748 filename1: (groupid=0, jobs=1): err= 0: pid=3961555: Tue Jul 23 10:56:24 2024 00:35:37.748 read: IOPS=44, BW=176KiB/s (180kB/s)(1784KiB/10128msec) 00:35:37.748 slat (usec): min=19, max=156, avg=78.50, stdev=31.64 00:35:37.748 clat (msec): min=175, max=583, avg=362.35, stdev=71.36 00:35:37.748 lat (msec): min=175, max=583, avg=362.43, stdev=71.36 00:35:37.748 clat percentiles (msec): 00:35:37.748 | 1.00th=[ 176], 5.00th=[ 226], 10.00th=[ 309], 20.00th=[ 338], 00:35:37.748 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 368], 60.00th=[ 368], 00:35:37.748 | 70.00th=[ 372], 80.00th=[ 380], 90.00th=[ 418], 95.00th=[ 502], 00:35:37.748 | 
99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:35:37.748 | 99.99th=[ 584] 00:35:37.748 bw ( KiB/s): min= 128, max= 256, per=3.92%, avg=181.05, stdev=55.70, samples=19 00:35:37.748 iops : min= 32, max= 64, avg=45.26, stdev=13.92, samples=19 00:35:37.748 lat (msec) : 250=5.83%, 500=89.24%, 750=4.93% 00:35:37.748 cpu : usr=98.06%, sys=1.33%, ctx=82, majf=0, minf=43 00:35:37.748 IO depths : 1=3.4%, 2=9.6%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:37.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.748 filename1: (groupid=0, jobs=1): err= 0: pid=3961556: Tue Jul 23 10:56:24 2024 00:35:37.748 read: IOPS=44, BW=176KiB/s (180kB/s)(1784KiB/10124msec) 00:35:37.748 slat (usec): min=15, max=145, avg=97.14, stdev=20.95 00:35:37.748 clat (msec): min=175, max=580, avg=362.01, stdev=70.93 00:35:37.748 lat (msec): min=175, max=580, avg=362.11, stdev=70.93 00:35:37.748 clat percentiles (msec): 00:35:37.748 | 1.00th=[ 176], 5.00th=[ 226], 10.00th=[ 309], 20.00th=[ 338], 00:35:37.748 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 368], 60.00th=[ 368], 00:35:37.748 | 70.00th=[ 372], 80.00th=[ 380], 90.00th=[ 418], 95.00th=[ 498], 00:35:37.748 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:35:37.748 | 99.99th=[ 584] 00:35:37.748 bw ( KiB/s): min= 127, max= 256, per=3.90%, avg=181.00, stdev=55.75, samples=19 00:35:37.748 iops : min= 31, max= 64, avg=45.21, stdev=13.98, samples=19 00:35:37.748 lat (msec) : 250=5.83%, 500=89.69%, 750=4.48% 00:35:37.748 cpu : usr=98.09%, sys=1.37%, ctx=123, majf=0, minf=50 00:35:37.748 IO depths : 1=3.4%, 2=9.6%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:37.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:37.748 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.748 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.748 filename1: (groupid=0, jobs=1): err= 0: pid=3961557: Tue Jul 23 10:56:24 2024 00:35:37.748 read: IOPS=44, BW=178KiB/s (182kB/s)(1792KiB/10072msec) 00:35:37.748 slat (usec): min=17, max=140, avg=96.91, stdev=19.07 00:35:37.748 clat (msec): min=244, max=512, avg=358.84, stdev=25.28 00:35:37.748 lat (msec): min=244, max=512, avg=358.94, stdev=25.28 00:35:37.748 clat percentiles (msec): 00:35:37.748 | 1.00th=[ 313], 5.00th=[ 321], 10.00th=[ 326], 20.00th=[ 342], 00:35:37.748 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 359], 60.00th=[ 368], 00:35:37.748 | 70.00th=[ 368], 80.00th=[ 372], 90.00th=[ 384], 95.00th=[ 384], 00:35:37.748 | 99.00th=[ 405], 99.50th=[ 481], 99.90th=[ 514], 99.95th=[ 514], 00:35:37.748 | 99.99th=[ 514] 00:35:37.748 bw ( KiB/s): min= 128, max= 256, per=3.72%, avg=172.80, stdev=62.64, samples=20 00:35:37.748 iops : min= 32, max= 64, avg=43.20, stdev=15.66, samples=20 00:35:37.749 lat (msec) : 250=0.45%, 500=99.11%, 750=0.45% 00:35:37.749 cpu : usr=97.36%, sys=1.70%, ctx=122, majf=0, minf=52 00:35:37.749 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:35:37.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.749 filename1: (groupid=0, jobs=1): err= 0: pid=3961558: Tue Jul 23 10:56:24 2024 00:35:37.749 read: IOPS=47, BW=189KiB/s (193kB/s)(1920KiB/10165msec) 00:35:37.749 slat (usec): min=6, max=158, avg=94.56, stdev=25.68 00:35:37.749 clat (msec): min=103, max=536, avg=338.04, stdev=73.24 00:35:37.749 lat (msec): min=103, 
max=536, avg=338.14, stdev=73.26 00:35:37.749 clat percentiles (msec): 00:35:37.749 | 1.00th=[ 105], 5.00th=[ 120], 10.00th=[ 257], 20.00th=[ 321], 00:35:37.749 | 30.00th=[ 338], 40.00th=[ 342], 50.00th=[ 355], 60.00th=[ 368], 00:35:37.749 | 70.00th=[ 372], 80.00th=[ 376], 90.00th=[ 380], 95.00th=[ 388], 00:35:37.749 | 99.00th=[ 523], 99.50th=[ 523], 99.90th=[ 535], 99.95th=[ 535], 00:35:37.749 | 99.99th=[ 535] 00:35:37.749 bw ( KiB/s): min= 128, max= 256, per=4.01%, avg=185.60, stdev=63.87, samples=20 00:35:37.749 iops : min= 32, max= 64, avg=46.40, stdev=15.97, samples=20 00:35:37.749 lat (msec) : 250=8.75%, 500=90.00%, 750=1.25% 00:35:37.749 cpu : usr=97.93%, sys=1.44%, ctx=41, majf=0, minf=46 00:35:37.749 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:35:37.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.749 filename2: (groupid=0, jobs=1): err= 0: pid=3961559: Tue Jul 23 10:56:24 2024 00:35:37.749 read: IOPS=57, BW=228KiB/s (234kB/s)(2304KiB/10094msec) 00:35:37.749 slat (usec): min=5, max=185, avg=57.22, stdev=38.68 00:35:37.749 clat (msec): min=107, max=387, avg=279.98, stdev=63.07 00:35:37.749 lat (msec): min=108, max=387, avg=280.03, stdev=63.10 00:35:37.749 clat percentiles (msec): 00:35:37.749 | 1.00th=[ 109], 5.00th=[ 121], 10.00th=[ 218], 20.00th=[ 247], 00:35:37.749 | 30.00th=[ 251], 40.00th=[ 259], 50.00th=[ 266], 60.00th=[ 309], 00:35:37.749 | 70.00th=[ 338], 80.00th=[ 342], 90.00th=[ 359], 95.00th=[ 368], 00:35:37.749 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 388], 99.95th=[ 388], 00:35:37.749 | 99.99th=[ 388] 00:35:37.749 bw ( KiB/s): min= 128, max= 368, per=4.83%, avg=224.00, stdev=74.14, samples=20 00:35:37.749 iops : min= 32, max= 92, avg=56.00, 
stdev=18.54, samples=20 00:35:37.749 lat (msec) : 250=26.74%, 500=73.26% 00:35:37.749 cpu : usr=97.95%, sys=1.30%, ctx=138, majf=0, minf=53 00:35:37.749 IO depths : 1=0.9%, 2=7.1%, 4=25.0%, 8=55.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:35:37.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.749 filename2: (groupid=0, jobs=1): err= 0: pid=3961560: Tue Jul 23 10:56:24 2024 00:35:37.749 read: IOPS=49, BW=198KiB/s (203kB/s)(2008KiB/10147msec) 00:35:37.749 slat (usec): min=8, max=159, avg=41.67, stdev=39.19 00:35:37.749 clat (msec): min=187, max=496, avg=320.95, stdev=62.11 00:35:37.749 lat (msec): min=187, max=496, avg=320.99, stdev=62.13 00:35:37.749 clat percentiles (msec): 00:35:37.749 | 1.00th=[ 188], 5.00th=[ 228], 10.00th=[ 228], 20.00th=[ 255], 00:35:37.749 | 30.00th=[ 266], 40.00th=[ 326], 50.00th=[ 342], 60.00th=[ 351], 00:35:37.749 | 70.00th=[ 359], 80.00th=[ 372], 90.00th=[ 376], 95.00th=[ 388], 00:35:37.749 | 99.00th=[ 472], 99.50th=[ 498], 99.90th=[ 498], 99.95th=[ 498], 00:35:37.749 | 99.99th=[ 498] 00:35:37.749 bw ( KiB/s): min= 128, max= 256, per=4.20%, avg=194.40, stdev=61.04, samples=20 00:35:37.749 iops : min= 32, max= 64, avg=48.60, stdev=15.26, samples=20 00:35:37.749 lat (msec) : 250=17.93%, 500=82.07% 00:35:37.749 cpu : usr=98.52%, sys=1.03%, ctx=23, majf=0, minf=48 00:35:37.749 IO depths : 1=2.8%, 2=7.8%, 4=21.1%, 8=58.6%, 16=9.8%, 32=0.0%, >=64=0.0% 00:35:37.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 complete : 0=0.0%, 4=92.9%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 issued rwts: total=502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.749 filename2: 
(groupid=0, jobs=1): err= 0: pid=3961561: Tue Jul 23 10:56:24 2024 00:35:37.749 read: IOPS=45, BW=183KiB/s (187kB/s)(1856KiB/10139msec) 00:35:37.749 slat (usec): min=5, max=127, avg=30.88, stdev=27.06 00:35:37.749 clat (msec): min=202, max=533, avg=347.11, stdev=49.18 00:35:37.749 lat (msec): min=202, max=533, avg=347.14, stdev=49.19 00:35:37.749 clat percentiles (msec): 00:35:37.749 | 1.00th=[ 224], 5.00th=[ 253], 10.00th=[ 266], 20.00th=[ 326], 00:35:37.749 | 30.00th=[ 342], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 368], 00:35:37.749 | 70.00th=[ 368], 80.00th=[ 372], 90.00th=[ 380], 95.00th=[ 384], 00:35:37.749 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 535], 99.95th=[ 535], 00:35:37.749 | 99.99th=[ 535] 00:35:37.749 bw ( KiB/s): min= 128, max= 256, per=3.88%, avg=179.20, stdev=59.78, samples=20 00:35:37.749 iops : min= 32, max= 64, avg=44.80, stdev=14.94, samples=20 00:35:37.749 lat (msec) : 250=3.45%, 500=95.69%, 750=0.86% 00:35:37.749 cpu : usr=98.56%, sys=1.02%, ctx=28, majf=0, minf=51 00:35:37.749 IO depths : 1=3.7%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.8%, 32=0.0%, >=64=0.0% 00:35:37.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.749 filename2: (groupid=0, jobs=1): err= 0: pid=3961562: Tue Jul 23 10:56:24 2024 00:35:37.749 read: IOPS=44, BW=177KiB/s (181kB/s)(1792KiB/10128msec) 00:35:37.749 slat (usec): min=5, max=133, avg=45.34, stdev=35.17 00:35:37.749 clat (msec): min=223, max=589, avg=361.29, stdev=55.46 00:35:37.749 lat (msec): min=223, max=589, avg=361.33, stdev=55.46 00:35:37.749 clat percentiles (msec): 00:35:37.749 | 1.00th=[ 226], 5.00th=[ 313], 10.00th=[ 326], 20.00th=[ 342], 00:35:37.749 | 30.00th=[ 342], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 368], 00:35:37.749 | 70.00th=[ 372], 
80.00th=[ 376], 90.00th=[ 380], 95.00th=[ 388], 00:35:37.749 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:35:37.749 | 99.99th=[ 592] 00:35:37.749 bw ( KiB/s): min= 128, max= 256, per=3.92%, avg=181.89, stdev=63.38, samples=19 00:35:37.749 iops : min= 32, max= 64, avg=45.47, stdev=15.84, samples=19 00:35:37.749 lat (msec) : 250=4.02%, 500=91.96%, 750=4.02% 00:35:37.749 cpu : usr=98.80%, sys=0.81%, ctx=19, majf=0, minf=53 00:35:37.749 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:37.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.749 filename2: (groupid=0, jobs=1): err= 0: pid=3961563: Tue Jul 23 10:56:24 2024 00:35:37.749 read: IOPS=45, BW=183KiB/s (187kB/s)(1856KiB/10154msec) 00:35:37.749 slat (usec): min=12, max=131, avg=39.36, stdev=19.94 00:35:37.749 clat (msec): min=177, max=514, avg=349.76, stdev=47.12 00:35:37.749 lat (msec): min=177, max=514, avg=349.80, stdev=47.11 00:35:37.749 clat percentiles (msec): 00:35:37.749 | 1.00th=[ 178], 5.00th=[ 288], 10.00th=[ 309], 20.00th=[ 330], 00:35:37.749 | 30.00th=[ 342], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 368], 00:35:37.749 | 70.00th=[ 372], 80.00th=[ 376], 90.00th=[ 388], 95.00th=[ 393], 00:35:37.749 | 99.00th=[ 460], 99.50th=[ 493], 99.90th=[ 514], 99.95th=[ 514], 00:35:37.749 | 99.99th=[ 514] 00:35:37.749 bw ( KiB/s): min= 128, max= 256, per=3.88%, avg=179.20, stdev=64.34, samples=20 00:35:37.749 iops : min= 32, max= 64, avg=44.80, stdev=16.08, samples=20 00:35:37.749 lat (msec) : 250=4.31%, 500=95.26%, 750=0.43% 00:35:37.749 cpu : usr=98.54%, sys=1.02%, ctx=17, majf=0, minf=33 00:35:37.749 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:37.749 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.749 filename2: (groupid=0, jobs=1): err= 0: pid=3961564: Tue Jul 23 10:56:24 2024 00:35:37.749 read: IOPS=44, BW=176KiB/s (180kB/s)(1784KiB/10124msec) 00:35:37.749 slat (usec): min=13, max=147, avg=62.96, stdev=38.62 00:35:37.749 clat (msec): min=177, max=579, avg=362.38, stdev=62.86 00:35:37.749 lat (msec): min=177, max=579, avg=362.44, stdev=62.86 00:35:37.749 clat percentiles (msec): 00:35:37.749 | 1.00th=[ 178], 5.00th=[ 305], 10.00th=[ 317], 20.00th=[ 338], 00:35:37.749 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 368], 60.00th=[ 368], 00:35:37.749 | 70.00th=[ 372], 80.00th=[ 376], 90.00th=[ 393], 95.00th=[ 464], 00:35:37.749 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:35:37.749 | 99.99th=[ 584] 00:35:37.749 bw ( KiB/s): min= 127, max= 256, per=3.90%, avg=181.00, stdev=62.48, samples=19 00:35:37.749 iops : min= 31, max= 64, avg=45.21, stdev=15.66, samples=19 00:35:37.749 lat (msec) : 250=4.48%, 500=91.93%, 750=3.59% 00:35:37.749 cpu : usr=98.52%, sys=1.07%, ctx=23, majf=0, minf=29 00:35:37.749 IO depths : 1=5.2%, 2=11.4%, 4=25.1%, 8=51.1%, 16=7.2%, 32=0.0%, >=64=0.0% 00:35:37.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.749 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.750 filename2: (groupid=0, jobs=1): err= 0: pid=3961565: Tue Jul 23 10:56:24 2024 00:35:37.750 read: IOPS=44, BW=177KiB/s (181kB/s)(1792KiB/10127msec) 00:35:37.750 slat (usec): min=9, max=105, avg=19.55, stdev=14.85 00:35:37.750 clat (msec): min=177, max=582, 
avg=361.48, stdev=72.72 00:35:37.750 lat (msec): min=177, max=582, avg=361.50, stdev=72.72 00:35:37.750 clat percentiles (msec): 00:35:37.750 | 1.00th=[ 178], 5.00th=[ 201], 10.00th=[ 309], 20.00th=[ 338], 00:35:37.750 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 368], 00:35:37.750 | 70.00th=[ 372], 80.00th=[ 380], 90.00th=[ 418], 95.00th=[ 506], 00:35:37.750 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:35:37.750 | 99.99th=[ 584] 00:35:37.750 bw ( KiB/s): min= 127, max= 256, per=3.92%, avg=181.84, stdev=63.42, samples=19 00:35:37.750 iops : min= 31, max= 64, avg=45.42, stdev=15.89, samples=19 00:35:37.750 lat (msec) : 250=6.70%, 500=87.50%, 750=5.80% 00:35:37.750 cpu : usr=98.58%, sys=1.01%, ctx=19, majf=0, minf=44 00:35:37.750 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:35:37.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.750 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.750 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.750 filename2: (groupid=0, jobs=1): err= 0: pid=3961566: Tue Jul 23 10:56:24 2024 00:35:37.750 read: IOPS=44, BW=177KiB/s (181kB/s)(1792KiB/10119msec) 00:35:37.750 slat (usec): min=9, max=146, avg=73.59, stdev=41.52 00:35:37.750 clat (msec): min=199, max=581, avg=360.78, stdev=54.84 00:35:37.750 lat (msec): min=199, max=581, avg=360.85, stdev=54.83 00:35:37.750 clat percentiles (msec): 00:35:37.750 | 1.00th=[ 249], 5.00th=[ 309], 10.00th=[ 317], 20.00th=[ 334], 00:35:37.750 | 30.00th=[ 342], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 368], 00:35:37.750 | 70.00th=[ 372], 80.00th=[ 376], 90.00th=[ 384], 95.00th=[ 388], 00:35:37.750 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:35:37.750 | 99.99th=[ 584] 00:35:37.750 bw ( KiB/s): min= 128, max= 256, per=3.92%, avg=181.84, stdev=63.41, 
samples=19 00:35:37.750 iops : min= 32, max= 64, avg=45.42, stdev=15.88, samples=19 00:35:37.750 lat (msec) : 250=1.34%, 500=94.64%, 750=4.02% 00:35:37.750 cpu : usr=98.36%, sys=1.20%, ctx=42, majf=0, minf=54 00:35:37.750 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:35:37.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.750 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.750 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.750 00:35:37.750 Run status group 0 (all jobs): 00:35:37.750 READ: bw=4618KiB/s (4729kB/s), 176KiB/s-258KiB/s (180kB/s-264kB/s), io=45.9MiB (48.1MB), run=10072-10175msec 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 bdev_null0 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 
00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 [2024-07-23 10:56:24.810992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 bdev_null1 
00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:37.750 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:37.751 { 00:35:37.751 "params": { 00:35:37.751 "name": "Nvme$subsystem", 00:35:37.751 "trtype": "$TEST_TRANSPORT", 00:35:37.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.751 "adrfam": "ipv4", 00:35:37.751 "trsvcid": "$NVMF_PORT", 00:35:37.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.751 "hdgst": ${hdgst:-false}, 00:35:37.751 "ddgst": ${ddgst:-false} 00:35:37.751 }, 00:35:37.751 "method": 
"bdev_nvme_attach_controller" 00:35:37.751 } 00:35:37.751 EOF 00:35:37.751 )") 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:37.751 { 00:35:37.751 "params": { 00:35:37.751 "name": "Nvme$subsystem", 00:35:37.751 "trtype": "$TEST_TRANSPORT", 00:35:37.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.751 "adrfam": "ipv4", 00:35:37.751 "trsvcid": "$NVMF_PORT", 00:35:37.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.751 "hdgst": ${hdgst:-false}, 00:35:37.751 "ddgst": ${ddgst:-false} 00:35:37.751 }, 00:35:37.751 "method": "bdev_nvme_attach_controller" 00:35:37.751 } 00:35:37.751 EOF 00:35:37.751 )") 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:37.751 
10:56:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:37.751 "params": { 00:35:37.751 "name": "Nvme0", 00:35:37.751 "trtype": "tcp", 00:35:37.751 "traddr": "10.0.0.2", 00:35:37.751 "adrfam": "ipv4", 00:35:37.751 "trsvcid": "4420", 00:35:37.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:37.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:37.751 "hdgst": false, 00:35:37.751 "ddgst": false 00:35:37.751 }, 00:35:37.751 "method": "bdev_nvme_attach_controller" 00:35:37.751 },{ 00:35:37.751 "params": { 00:35:37.751 "name": "Nvme1", 00:35:37.751 "trtype": "tcp", 00:35:37.751 "traddr": "10.0.0.2", 00:35:37.751 "adrfam": "ipv4", 00:35:37.751 "trsvcid": "4420", 00:35:37.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:37.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:37.751 "hdgst": false, 00:35:37.751 "ddgst": false 00:35:37.751 }, 00:35:37.751 "method": "bdev_nvme_attach_controller" 00:35:37.751 }' 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # 
asan_lib= 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:37.751 10:56:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.751 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:37.751 ... 00:35:37.751 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:37.751 ... 00:35:37.751 fio-3.35 00:35:37.751 Starting 4 threads 00:35:37.751 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.021 00:35:43.021 filename0: (groupid=0, jobs=1): err= 0: pid=3962609: Tue Jul 23 10:56:30 2024 00:35:43.021 read: IOPS=1647, BW=12.9MiB/s (13.5MB/s)(64.4MiB/5004msec) 00:35:43.021 slat (nsec): min=5876, max=66634, avg=23752.67, stdev=12338.28 00:35:43.021 clat (usec): min=1242, max=8456, avg=4771.02, stdev=389.78 00:35:43.021 lat (usec): min=1262, max=8478, avg=4794.77, stdev=388.72 00:35:43.021 clat percentiles (usec): 00:35:43.021 | 1.00th=[ 3785], 5.00th=[ 4424], 10.00th=[ 4555], 20.00th=[ 4621], 00:35:43.021 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4752], 60.00th=[ 4752], 00:35:43.021 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 4948], 95.00th=[ 5014], 00:35:43.021 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 8094], 99.95th=[ 8455], 00:35:43.021 | 99.99th=[ 8455] 00:35:43.021 bw ( KiB/s): min=12656, max=13466, per=24.94%, avg=13181.80, stdev=217.26, samples=10 00:35:43.021 iops : min= 1582, max= 1683, avg=1647.70, stdev=27.12, samples=10 00:35:43.021 lat (msec) : 2=0.06%, 4=1.87%, 10=98.07% 00:35:43.021 cpu : usr=95.52%, sys=4.00%, ctx=21, majf=0, minf=10 00:35:43.021 IO depths : 1=1.0%, 
2=19.5%, 4=54.0%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.021 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.021 issued rwts: total=8245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.021 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:43.021 filename0: (groupid=0, jobs=1): err= 0: pid=3962610: Tue Jul 23 10:56:30 2024 00:35:43.021 read: IOPS=1641, BW=12.8MiB/s (13.4MB/s)(64.1MiB/5002msec) 00:35:43.021 slat (nsec): min=6055, max=77564, avg=23359.95, stdev=11848.95 00:35:43.021 clat (usec): min=946, max=8833, avg=4785.24, stdev=489.72 00:35:43.021 lat (usec): min=960, max=8841, avg=4808.60, stdev=488.83 00:35:43.021 clat percentiles (usec): 00:35:43.021 | 1.00th=[ 3720], 5.00th=[ 4424], 10.00th=[ 4621], 20.00th=[ 4621], 00:35:43.021 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4752], 60.00th=[ 4752], 00:35:43.021 | 70.00th=[ 4817], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5276], 00:35:43.021 | 99.00th=[ 7177], 99.50th=[ 7898], 99.90th=[ 8717], 99.95th=[ 8848], 00:35:43.021 | 99.99th=[ 8848] 00:35:43.021 bw ( KiB/s): min=12928, max=13312, per=24.87%, avg=13144.89, stdev=109.14, samples=9 00:35:43.021 iops : min= 1616, max= 1664, avg=1643.11, stdev=13.64, samples=9 00:35:43.021 lat (usec) : 1000=0.02% 00:35:43.021 lat (msec) : 2=0.21%, 4=1.68%, 10=98.09% 00:35:43.021 cpu : usr=95.76%, sys=3.78%, ctx=11, majf=0, minf=9 00:35:43.021 IO depths : 1=0.3%, 2=19.7%, 4=53.7%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.021 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.021 issued rwts: total=8210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.021 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:43.021 filename1: (groupid=0, jobs=1): err= 0: pid=3962611: Tue Jul 23 10:56:30 2024 00:35:43.021 read: 
IOPS=1657, BW=12.9MiB/s (13.6MB/s)(64.8MiB/5002msec) 00:35:43.021 slat (nsec): min=5854, max=97512, avg=25221.98, stdev=12511.32 00:35:43.021 clat (usec): min=967, max=8837, avg=4723.18, stdev=432.06 00:35:43.021 lat (usec): min=981, max=8852, avg=4748.40, stdev=432.29 00:35:43.021 clat percentiles (usec): 00:35:43.021 | 1.00th=[ 3523], 5.00th=[ 4293], 10.00th=[ 4555], 20.00th=[ 4621], 00:35:43.021 | 30.00th=[ 4686], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4752], 00:35:43.021 | 70.00th=[ 4817], 80.00th=[ 4817], 90.00th=[ 4883], 95.00th=[ 5014], 00:35:43.021 | 99.00th=[ 6652], 99.50th=[ 7242], 99.90th=[ 8586], 99.95th=[ 8586], 00:35:43.021 | 99.99th=[ 8848] 00:35:43.021 bw ( KiB/s): min=12816, max=13936, per=25.14%, avg=13285.33, stdev=291.20, samples=9 00:35:43.021 iops : min= 1602, max= 1742, avg=1660.67, stdev=36.40, samples=9 00:35:43.021 lat (usec) : 1000=0.01% 00:35:43.021 lat (msec) : 2=0.19%, 4=2.16%, 10=97.64% 00:35:43.021 cpu : usr=95.86%, sys=3.36%, ctx=95, majf=0, minf=9 00:35:43.021 IO depths : 1=1.2%, 2=23.0%, 4=51.2%, 8=24.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.021 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.021 issued rwts: total=8289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.021 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:43.021 filename1: (groupid=0, jobs=1): err= 0: pid=3962612: Tue Jul 23 10:56:30 2024 00:35:43.021 read: IOPS=1662, BW=13.0MiB/s (13.6MB/s)(64.9MiB/5001msec) 00:35:43.021 slat (usec): min=5, max=107, avg=17.46, stdev=11.81 00:35:43.021 clat (usec): min=1837, max=8602, avg=4755.54, stdev=295.43 00:35:43.021 lat (usec): min=1855, max=8623, avg=4773.00, stdev=296.25 00:35:43.021 clat percentiles (usec): 00:35:43.021 | 1.00th=[ 3916], 5.00th=[ 4359], 10.00th=[ 4621], 20.00th=[ 4686], 00:35:43.021 | 30.00th=[ 4752], 40.00th=[ 4752], 50.00th=[ 4752], 60.00th=[ 4817], 00:35:43.021 | 
70.00th=[ 4817], 80.00th=[ 4817], 90.00th=[ 4883], 95.00th=[ 5014], 00:35:43.021 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 8094], 99.95th=[ 8160], 00:35:43.021 | 99.99th=[ 8586] 00:35:43.021 bw ( KiB/s): min=13024, max=13824, per=25.22%, avg=13328.00, stdev=237.86, samples=9 00:35:43.021 iops : min= 1628, max= 1728, avg=1666.00, stdev=29.73, samples=9 00:35:43.021 lat (msec) : 2=0.02%, 4=1.74%, 10=98.23% 00:35:43.021 cpu : usr=94.50%, sys=4.50%, ctx=33, majf=0, minf=2 00:35:43.021 IO depths : 1=0.4%, 2=12.7%, 4=60.9%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.021 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.021 issued rwts: total=8313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.021 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:43.021 00:35:43.021 Run status group 0 (all jobs): 00:35:43.021 READ: bw=51.6MiB/s (54.1MB/s), 12.8MiB/s-13.0MiB/s (13.4MB/s-13.6MB/s), io=258MiB (271MB), run=5001-5004msec 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.021 10:56:30 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:43.021 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.022 10:56:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:43.022 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.022 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:43.022 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.022 00:35:43.022 real 0m24.004s 00:35:43.022 user 4m35.868s 00:35:43.022 sys 0m5.406s 00:35:43.022 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:43.022 10:56:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:43.022 ************************************ 00:35:43.022 END TEST fio_dif_rand_params 00:35:43.022 ************************************ 00:35:43.022 10:56:30 nvmf_dif -- 
target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:43.022 10:56:30 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:43.022 10:56:30 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:43.022 10:56:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:43.022 ************************************ 00:35:43.022 START TEST fio_dif_digest 00:35:43.022 ************************************ 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 
00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:43.022 bdev_null0 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:43.022 [2024-07-23 10:56:30.961246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:43.022 10:56:30 
nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:43.022 { 00:35:43.022 "params": { 00:35:43.022 "name": "Nvme$subsystem", 00:35:43.022 "trtype": "$TEST_TRANSPORT", 00:35:43.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:43.022 "adrfam": "ipv4", 00:35:43.022 "trsvcid": "$NVMF_PORT", 00:35:43.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:43.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:43.022 "hdgst": ${hdgst:-false}, 00:35:43.022 "ddgst": ${ddgst:-false} 00:35:43.022 }, 00:35:43.022 "method": "bdev_nvme_attach_controller" 00:35:43.022 } 00:35:43.022 EOF 00:35:43.022 )") 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:43.022 "params": { 00:35:43.022 "name": "Nvme0", 00:35:43.022 "trtype": "tcp", 00:35:43.022 "traddr": "10.0.0.2", 00:35:43.022 "adrfam": "ipv4", 00:35:43.022 "trsvcid": "4420", 00:35:43.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:43.022 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:43.022 "hdgst": true, 00:35:43.022 "ddgst": true 00:35:43.022 }, 00:35:43.022 "method": "bdev_nvme_attach_controller" 00:35:43.022 }' 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:43.022 10:56:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:43.022 10:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:43.022 10:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:43.022 10:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:43.022 10:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.022 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:43.022 ... 
00:35:43.022 fio-3.35 00:35:43.022 Starting 3 threads 00:35:43.022 EAL: No free 2048 kB hugepages reported on node 1 00:35:55.217 00:35:55.217 filename0: (groupid=0, jobs=1): err= 0: pid=3963186: Tue Jul 23 10:56:41 2024 00:35:55.217 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(229MiB/10045msec) 00:35:55.217 slat (nsec): min=8098, max=38498, avg=14791.52, stdev=3685.40 00:35:55.217 clat (usec): min=9765, max=55698, avg=16397.40, stdev=1717.82 00:35:55.217 lat (usec): min=9781, max=55711, avg=16412.19, stdev=1717.88 00:35:55.217 clat percentiles (usec): 00:35:55.217 | 1.00th=[11863], 5.00th=[14484], 10.00th=[15008], 20.00th=[15533], 00:35:55.217 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16450], 60.00th=[16581], 00:35:55.217 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:35:55.217 | 99.00th=[19268], 99.50th=[20055], 99.90th=[47973], 99.95th=[55837], 00:35:55.217 | 99.99th=[55837] 00:35:55.217 bw ( KiB/s): min=22573, max=24832, per=35.98%, avg=23439.05, stdev=511.09, samples=20 00:35:55.217 iops : min= 176, max= 194, avg=183.10, stdev= 4.02, samples=20 00:35:55.217 lat (msec) : 10=0.05%, 20=99.45%, 50=0.44%, 100=0.05% 00:35:55.217 cpu : usr=95.05%, sys=4.54%, ctx=26, majf=0, minf=161 00:35:55.217 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.217 issued rwts: total=1833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.217 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.217 filename0: (groupid=0, jobs=1): err= 0: pid=3963187: Tue Jul 23 10:56:41 2024 00:35:55.217 read: IOPS=179, BW=22.4MiB/s (23.5MB/s)(226MiB/10047msec) 00:35:55.217 slat (nsec): min=8184, max=66347, avg=23223.75, stdev=6521.80 00:35:55.217 clat (usec): min=11810, max=62765, avg=16655.15, stdev=3351.14 00:35:55.217 lat (usec): min=11836, max=62780, avg=16678.38, 
stdev=3350.95 00:35:55.217 clat percentiles (usec): 00:35:55.217 | 1.00th=[13960], 5.00th=[14746], 10.00th=[15139], 20.00th=[15664], 00:35:55.217 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16319], 60.00th=[16581], 00:35:55.217 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:35:55.217 | 99.00th=[20317], 99.50th=[54789], 99.90th=[62129], 99.95th=[62653], 00:35:55.217 | 99.99th=[62653] 00:35:55.217 bw ( KiB/s): min=19200, max=24064, per=35.40%, avg=23065.60, stdev=1141.55, samples=20 00:35:55.217 iops : min= 150, max= 188, avg=180.20, stdev= 8.92, samples=20 00:35:55.217 lat (msec) : 20=98.67%, 50=0.78%, 100=0.55% 00:35:55.217 cpu : usr=94.87%, sys=4.64%, ctx=21, majf=0, minf=170 00:35:55.217 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.217 issued rwts: total=1804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.217 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.217 filename0: (groupid=0, jobs=1): err= 0: pid=3963188: Tue Jul 23 10:56:41 2024 00:35:55.217 read: IOPS=147, BW=18.4MiB/s (19.3MB/s)(185MiB/10045msec) 00:35:55.217 slat (nsec): min=7977, max=33085, avg=14828.33, stdev=3157.97 00:35:55.217 clat (usec): min=12404, max=58776, avg=20355.04, stdev=2070.07 00:35:55.217 lat (usec): min=12416, max=58794, avg=20369.87, stdev=2070.24 00:35:55.217 clat percentiles (usec): 00:35:55.217 | 1.00th=[15926], 5.00th=[18220], 10.00th=[18744], 20.00th=[19268], 00:35:55.217 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20317], 60.00th=[20579], 00:35:55.217 | 70.00th=[20841], 80.00th=[21365], 90.00th=[21890], 95.00th=[22676], 00:35:55.217 | 99.00th=[24773], 99.50th=[26084], 99.90th=[53216], 99.95th=[58983], 00:35:55.217 | 99.99th=[58983] 00:35:55.217 bw ( KiB/s): min=16160, max=19712, per=28.98%, avg=18881.60, stdev=712.26, 
samples=20 00:35:55.217 iops : min= 126, max= 154, avg=147.50, stdev= 5.61, samples=20 00:35:55.217 lat (msec) : 20=42.92%, 50=56.94%, 100=0.14% 00:35:55.217 cpu : usr=94.43%, sys=5.16%, ctx=25, majf=0, minf=107 00:35:55.217 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.217 issued rwts: total=1477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.217 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.217 00:35:55.217 Run status group 0 (all jobs): 00:35:55.217 READ: bw=63.6MiB/s (66.7MB/s), 18.4MiB/s-22.8MiB/s (19.3MB/s-23.9MB/s), io=639MiB (670MB), run=10045-10047msec 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.217 00:35:55.217 real 0m10.993s 00:35:55.217 user 0m29.361s 00:35:55.217 sys 0m1.670s 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:55.217 10:56:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.217 ************************************ 00:35:55.217 END TEST fio_dif_digest 00:35:55.217 ************************************ 00:35:55.217 10:56:41 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:55.217 10:56:41 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:55.217 10:56:41 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:55.217 10:56:41 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:55.217 10:56:41 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:55.217 10:56:41 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:55.217 10:56:41 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:55.217 10:56:41 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:55.217 rmmod nvme_tcp 00:35:55.217 rmmod nvme_fabrics 00:35:55.217 rmmod nvme_keyring 00:35:55.217 10:56:41 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:55.217 10:56:42 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:55.217 10:56:42 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:55.218 10:56:42 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3958583 ']' 00:35:55.218 10:56:42 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3958583 00:35:55.218 10:56:42 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3958583 ']' 00:35:55.218 10:56:42 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3958583 00:35:55.218 10:56:42 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:35:55.218 10:56:42 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:55.218 10:56:42 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3958583 00:35:55.218 10:56:42 nvmf_dif -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:55.218 10:56:42 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:55.218 10:56:42 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3958583' 00:35:55.218 killing process with pid 3958583 00:35:55.218 10:56:42 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3958583 00:35:55.218 10:56:42 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3958583 00:35:55.218 10:56:42 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:55.218 10:56:42 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:55.218 Waiting for block devices as requested 00:35:55.218 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:35:55.218 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:35:55.218 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:35:55.218 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:35:55.218 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:35:55.218 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:35:55.218 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:35:55.218 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:35:55.477 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:35:55.477 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:35:55.477 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:35:55.477 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:35:55.737 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:35:55.737 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:35:55.737 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:35:55.997 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:35:55.997 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:35:55.997 10:56:44 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:55.997 10:56:44 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:55.997 10:56:44 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:55.997 
10:56:44 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:55.997 10:56:44 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.997 10:56:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:55.997 10:56:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:58.531 10:56:46 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:58.531 00:35:58.531 real 1m5.125s 00:35:58.531 user 6m30.154s 00:35:58.531 sys 0m16.115s 00:35:58.531 10:56:46 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:58.531 10:56:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:58.531 ************************************ 00:35:58.531 END TEST nvmf_dif 00:35:58.531 ************************************ 00:35:58.531 10:56:46 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:58.531 10:56:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:58.531 10:56:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:58.531 10:56:46 -- common/autotest_common.sh@10 -- # set +x 00:35:58.531 ************************************ 00:35:58.531 START TEST nvmf_abort_qd_sizes 00:35:58.531 ************************************ 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:58.531 * Looking for test storage... 
00:35:58.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:58.531 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:58.532 10:56:46 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:58.532 10:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:59.908 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:35:59.909 Found 0000:08:00.0 (0x8086 - 0x159b) 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:35:59.909 Found 0000:08:00.1 (0x8086 - 0x159b) 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 
00:35:59.909 Found net devices under 0000:08:00.0: cvl_0_0 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:35:59.909 Found net devices under 0000:08:00.1: cvl_0_1 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:59.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:59.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:35:59.909 00:35:59.909 --- 10.0.0.2 ping statistics --- 00:35:59.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.909 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:59.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:59.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:35:59.909 00:35:59.909 --- 10.0.0.1 ping statistics --- 00:35:59.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.909 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:59.909 10:56:48 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:00.845 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:36:00.845 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:36:00.845 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:36:00.845 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:36:00.845 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:36:00.845 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:36:00.845 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:36:01.104 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:36:01.104 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:36:01.104 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:36:01.104 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:36:01.104 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:36:01.104 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:36:01.104 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:36:01.104 0000:80:04.1 (8086 3c21): 
ioatdma -> vfio-pci 00:36:01.104 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:36:02.061 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3966893 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3966893 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3966893 ']' 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:02.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:02.061 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:02.061 [2024-07-23 10:56:50.459851] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:36:02.061 [2024-07-23 10:56:50.459944] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:02.061 EAL: No free 2048 kB hugepages reported on node 1 00:36:02.061 [2024-07-23 10:56:50.529267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:02.319 [2024-07-23 10:56:50.621196] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:02.319 [2024-07-23 10:56:50.621266] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:02.319 [2024-07-23 10:56:50.621282] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:02.319 [2024-07-23 10:56:50.621295] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:02.319 [2024-07-23 10:56:50.621307] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
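The nvmf/common.sh xtrace above builds a two-endpoint NVMe/TCP topology out of a single host by moving the target-side interface into a network namespace. As a hedged sketch of that wiring (the CI log uses two ports of a physical CVL NIC; the veth pair below is an assumption so the same topology can be reproduced without that hardware, as root):

```shell
# Sketch of the netns topology nvmf/common.sh sets up above.
# cvl_0_0/cvl_0_1 are physical NIC ports in the CI log; a veth pair
# (an assumption) stands in here so the sketch needs no special hardware.
ip link add cvl_0_1 type veth peer name cvl_0_0    # assumption: veth, not real NICs
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
ping -c 1 10.0.0.2                                 # reachability check, as in the log
```

With this in place, `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix seen in the log) so that it listens on 10.0.0.2 while the initiator-side tools connect from the host at 10.0.0.1.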
00:36:02.319 [2024-07-23 10:56:50.622504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:02.319 [2024-07-23 10:56:50.622598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:02.319 [2024-07-23 10:56:50.622684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:02.319 [2024-07-23 10:56:50.622717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:84:00.0 ]] 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:84:00.0 ]] 
00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:84:00.0 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:84:00.0 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:02.319 10:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:02.319 ************************************ 00:36:02.319 START TEST spdk_target_abort 00:36:02.319 ************************************ 00:36:02.319 10:56:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:02.319 10:56:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:02.319 10:56:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target 00:36:02.319 10:56:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.319 10:56:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.596 spdk_targetn1 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.596 [2024-07-23 10:56:53.617465] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.596 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.597 [2024-07-23 10:56:53.649922] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:05.597 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:05.597 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.874 Initializing NVMe Controllers 00:36:08.874 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:08.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:08.874 Initialization complete. Launching workers. 
00:36:08.874 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11662, failed: 0 00:36:08.874 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1211, failed to submit 10451 00:36:08.874 success 781, unsuccess 430, failed 0 00:36:08.874 10:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:08.874 10:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:08.874 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.153 Initializing NVMe Controllers 00:36:12.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:12.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:12.153 Initialization complete. Launching workers. 
00:36:12.153 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8592, failed: 0 00:36:12.153 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1254, failed to submit 7338 00:36:12.153 success 333, unsuccess 921, failed 0 00:36:12.153 10:57:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:12.153 10:57:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:12.153 EAL: No free 2048 kB hugepages reported on node 1 00:36:14.747 Initializing NVMe Controllers 00:36:14.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:14.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:14.747 Initialization complete. Launching workers. 
00:36:14.747 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28977, failed: 0 00:36:14.747 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2581, failed to submit 26396 00:36:14.747 success 379, unsuccess 2202, failed 0 00:36:14.747 10:57:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:14.747 10:57:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.747 10:57:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.004 10:57:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.004 10:57:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:15.004 10:57:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.004 10:57:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3966893 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3966893 ']' 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3966893 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3966893 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3966893' 00:36:16.373 killing process with pid 3966893 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3966893 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3966893 00:36:16.373 00:36:16.373 real 0m13.979s 00:36:16.373 user 0m52.975s 00:36:16.373 sys 0m2.387s 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.373 ************************************ 00:36:16.373 END TEST spdk_target_abort 00:36:16.373 ************************************ 00:36:16.373 10:57:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:16.373 10:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:16.373 10:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:16.373 10:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:16.373 ************************************ 00:36:16.373 START TEST kernel_target_abort 00:36:16.373 ************************************ 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:16.373 10:57:04 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:16.373 10:57:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:17.307 Waiting for block devices as requested 00:36:17.566 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:36:17.566 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:36:17.566 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:36:17.824 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:36:17.824 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:36:17.824 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:36:17.824 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:36:18.083 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:36:18.083 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:36:18.083 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:36:18.083 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:36:18.340 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:36:18.340 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:36:18.340 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:36:18.597 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:36:18.597 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:36:18.597 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:36:18.597 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:18.597 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:18.597 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:18.597 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local 
device=nvme0n1 00:36:18.597 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:18.597 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:18.597 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:18.597 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:18.597 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:18.854 No valid GPT data, bailing 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:18.854 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:36:18.855 00:36:18.855 Discovery Log Number of Records 2, Generation counter 2 00:36:18.855 =====Discovery Log Entry 0====== 00:36:18.855 trtype: tcp 00:36:18.855 adrfam: ipv4 00:36:18.855 subtype: current discovery subsystem 00:36:18.855 treq: not specified, sq flow control disable supported 00:36:18.855 portid: 1 00:36:18.855 trsvcid: 4420 00:36:18.855 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:18.855 traddr: 10.0.0.1 00:36:18.855 eflags: none 00:36:18.855 sectype: none 00:36:18.855 =====Discovery Log Entry 1====== 00:36:18.855 trtype: tcp 00:36:18.855 adrfam: ipv4 00:36:18.855 subtype: nvme subsystem 00:36:18.855 treq: not specified, sq flow control disable supported 00:36:18.855 portid: 1 00:36:18.855 trsvcid: 4420 00:36:18.855 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:18.855 traddr: 10.0.0.1 00:36:18.855 eflags: none 00:36:18.855 sectype: none 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:18.855 10:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:18.855 EAL: No free 2048 kB hugepages reported on node 1 00:36:22.132 Initializing NVMe Controllers 00:36:22.132 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:22.132 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:22.132 Initialization complete. Launching workers. 
00:36:22.132 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 45167, failed: 0 00:36:22.132 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 45167, failed to submit 0 00:36:22.132 success 0, unsuccess 45167, failed 0 00:36:22.132 10:57:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:22.132 10:57:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:22.132 EAL: No free 2048 kB hugepages reported on node 1 00:36:25.421 Initializing NVMe Controllers 00:36:25.421 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:25.421 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:25.421 Initialization complete. Launching workers. 
00:36:25.421 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79774, failed: 0 00:36:25.421 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20098, failed to submit 59676 00:36:25.421 success 0, unsuccess 20098, failed 0 00:36:25.421 10:57:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:25.421 10:57:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:25.421 EAL: No free 2048 kB hugepages reported on node 1 00:36:28.708 Initializing NVMe Controllers 00:36:28.708 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:28.708 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:28.708 Initialization complete. Launching workers. 
00:36:28.708 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77384, failed: 0 00:36:28.708 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19334, failed to submit 58050 00:36:28.708 success 0, unsuccess 19334, failed 0 00:36:28.708 10:57:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:28.708 10:57:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:28.708 10:57:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:28.708 10:57:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:28.709 10:57:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:28.709 10:57:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:28.709 10:57:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:28.709 10:57:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:28.709 10:57:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:28.709 10:57:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:29.278 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:36:29.278 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:36:29.278 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:36:29.278 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:36:29.278 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:36:29.278 
0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:36:29.278 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:36:29.278 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:36:29.278 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:36:29.278 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:36:29.278 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:36:29.278 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:36:29.278 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:36:29.278 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:36:29.278 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:36:29.278 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:36:30.218 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:36:30.218 00:36:30.218 real 0m13.813s 00:36:30.218 user 0m6.609s 00:36:30.218 sys 0m2.898s 00:36:30.218 10:57:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:30.218 10:57:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.218 ************************************ 00:36:30.218 END TEST kernel_target_abort 00:36:30.218 ************************************ 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:30.218 rmmod nvme_tcp 00:36:30.218 rmmod nvme_fabrics 00:36:30.218 rmmod nvme_keyring 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3966893 ']' 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3966893 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3966893 ']' 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3966893 00:36:30.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3966893) - No such process 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3966893 is not found' 00:36:30.218 Process with pid 3966893 is not found 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:30.218 10:57:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:31.155 Waiting for block devices as requested 00:36:31.156 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:36:31.415 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:36:31.415 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:36:31.415 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:36:31.676 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:36:31.676 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:36:31.676 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:36:31.676 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:36:31.936 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:36:31.936 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:36:31.936 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:36:31.936 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:36:32.194 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:36:32.194 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:36:32.194 
0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:36:32.194 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:36:32.453 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:36:32.453 10:57:20 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:32.453 10:57:20 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:32.453 10:57:20 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:32.453 10:57:20 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:32.453 10:57:20 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.453 10:57:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:32.453 10:57:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.360 10:57:22 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:34.360 00:36:34.360 real 0m36.315s 00:36:34.360 user 1m1.449s 00:36:34.360 sys 0m8.208s 00:36:34.360 10:57:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:34.360 10:57:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:34.360 ************************************ 00:36:34.360 END TEST nvmf_abort_qd_sizes 00:36:34.360 ************************************ 00:36:34.620 10:57:22 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:34.620 10:57:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:34.620 10:57:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:34.620 10:57:22 -- common/autotest_common.sh@10 -- # set +x 00:36:34.620 ************************************ 00:36:34.620 START TEST keyring_file 00:36:34.620 ************************************ 00:36:34.620 10:57:22 keyring_file -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:34.620 * Looking for test storage... 00:36:34.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:34.620 10:57:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:34.620 10:57:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:34.620 10:57:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:34.620 10:57:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:34.620 10:57:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:34.620 10:57:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:34.620 10:57:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:34.620 10:57:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:34.620 10:57:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:34.621 10:57:22 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:34.621 10:57:22 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:34.621 10:57:22 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:34.621 10:57:22 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.621 10:57:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.621 10:57:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.621 10:57:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:34.621 10:57:22 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:34.621 10:57:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:34.621 10:57:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:34.621 10:57:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:34.621 10:57:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:34.621 10:57:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:34.621 10:57:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:34.621 10:57:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:34.621 10:57:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:34.621 10:57:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:34.621 10:57:22 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:34.621 10:57:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:34.621 10:57:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:34.621 10:57:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XLW8KVmOVI 00:36:34.621 10:57:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:34.621 10:57:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:34.621 10:57:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XLW8KVmOVI 00:36:34.621 10:57:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XLW8KVmOVI 00:36:34.621 10:57:23 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.XLW8KVmOVI 00:36:34.621 10:57:23 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:34.621 10:57:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:34.621 10:57:23 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:34.621 10:57:23 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:34.621 10:57:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:34.621 10:57:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:34.621 10:57:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Np3CDBLfF1 00:36:34.621 10:57:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:34.621 10:57:23 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:34.621 10:57:23 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:34.621 10:57:23 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:34.621 10:57:23 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:34.621 10:57:23 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:34.621 10:57:23 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:34.621 10:57:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Np3CDBLfF1 00:36:34.621 10:57:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Np3CDBLfF1 00:36:34.621 10:57:23 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Np3CDBLfF1 00:36:34.621 10:57:23 keyring_file -- keyring/file.sh@30 -- # tgtpid=3971970 00:36:34.621 10:57:23 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:34.621 10:57:23 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3971970 00:36:34.621 10:57:23 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3971970 ']' 00:36:34.621 10:57:23 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.621 10:57:23 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:34.621 10:57:23 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:34.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.621 10:57:23 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:34.621 10:57:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:34.881 [2024-07-23 10:57:23.124242] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:36:34.881 [2024-07-23 10:57:23.124336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3971970 ] 00:36:34.881 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.882 [2024-07-23 10:57:23.188779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.882 [2024-07-23 10:57:23.279920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:35.142 10:57:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:35.142 [2024-07-23 10:57:23.505184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:35.142 null0 00:36:35.142 [2024-07-23 10:57:23.537201] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:35.142 [2024-07-23 10:57:23.537652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:35.142 [2024-07-23 10:57:23.545245] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.142 10:57:23 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 
nqn.2016-06.io.spdk:cnode0 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:35.142 [2024-07-23 10:57:23.557245] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:35.142 request: 00:36:35.142 { 00:36:35.142 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:35.142 "secure_channel": false, 00:36:35.142 "listen_address": { 00:36:35.142 "trtype": "tcp", 00:36:35.142 "traddr": "127.0.0.1", 00:36:35.142 "trsvcid": "4420" 00:36:35.142 }, 00:36:35.142 "method": "nvmf_subsystem_add_listener", 00:36:35.142 "req_id": 1 00:36:35.142 } 00:36:35.142 Got JSON-RPC error response 00:36:35.142 response: 00:36:35.142 { 00:36:35.142 "code": -32602, 00:36:35.142 "message": "Invalid parameters" 00:36:35.142 } 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:35.142 10:57:23 keyring_file -- keyring/file.sh@46 -- # bperfpid=3972027 00:36:35.142 10:57:23 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3972027 /var/tmp/bperf.sock 
00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3972027 ']' 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:35.142 10:57:23 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:35.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:35.142 10:57:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:35.142 [2024-07-23 10:57:23.608209] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:36:35.142 [2024-07-23 10:57:23.608309] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972027 ] 00:36:35.142 EAL: No free 2048 kB hugepages reported on node 1 00:36:35.401 [2024-07-23 10:57:23.669150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.401 [2024-07-23 10:57:23.759736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.401 10:57:23 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:35.401 10:57:23 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:35.401 10:57:23 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XLW8KVmOVI 00:36:35.401 10:57:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XLW8KVmOVI 00:36:35.659 10:57:24 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Np3CDBLfF1 00:36:35.659 10:57:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Np3CDBLfF1 00:36:36.254 10:57:24 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:36.254 10:57:24 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:36.254 10:57:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:36.254 10:57:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.254 10:57:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:36.254 10:57:24 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.XLW8KVmOVI == \/\t\m\p\/\t\m\p\.\X\L\W\8\K\V\m\O\V\I ]] 00:36:36.254 
10:57:24 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:36.254 10:57:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:36.254 10:57:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:36.254 10:57:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.254 10:57:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:36.820 10:57:25 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Np3CDBLfF1 == \/\t\m\p\/\t\m\p\.\N\p\3\C\D\B\L\f\F\1 ]] 00:36:36.820 10:57:25 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:36.820 10:57:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:36.820 10:57:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:36.820 10:57:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:36.820 10:57:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.820 10:57:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:37.079 10:57:25 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:37.079 10:57:25 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:37.079 10:57:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:37.079 10:57:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:37.079 10:57:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:37.079 10:57:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:37.079 10:57:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:37.337 10:57:25 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:37.337 10:57:25 keyring_file -- 
keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:37.337 10:57:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:37.595 [2024-07-23 10:57:25.925385] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:37.595 nvme0n1 00:36:37.595 10:57:26 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:37.595 10:57:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:37.595 10:57:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:37.595 10:57:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:37.595 10:57:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:37.595 10:57:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:37.853 10:57:26 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:37.853 10:57:26 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:37.853 10:57:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:37.853 10:57:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:37.853 10:57:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:37.853 10:57:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:37.853 10:57:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:38.111 10:57:26 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:38.111 10:57:26 
keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:38.369 Running I/O for 1 seconds... 00:36:39.303 00:36:39.303 Latency(us) 00:36:39.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.303 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:39.303 nvme0n1 : 1.01 8550.84 33.40 0.00 0.00 14896.32 5461.33 22622.06 00:36:39.303 =================================================================================================================== 00:36:39.303 Total : 8550.84 33.40 0.00 0.00 14896.32 5461.33 22622.06 00:36:39.303 0 00:36:39.303 10:57:27 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:39.303 10:57:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:39.560 10:57:28 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:39.560 10:57:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:39.560 10:57:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:39.560 10:57:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:39.560 10:57:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.560 10:57:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:40.126 10:57:28 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:40.126 10:57:28 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:40.126 10:57:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:40.126 10:57:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:40.126 10:57:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:36:40.126 10:57:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.126 10:57:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:40.384 10:57:28 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:40.384 10:57:28 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:40.384 10:57:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:40.384 10:57:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:40.384 10:57:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:40.384 10:57:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.384 10:57:28 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:40.384 10:57:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.384 10:57:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:40.385 10:57:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:40.643 [2024-07-23 10:57:28.929615] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 
107: Transport endpoint is not connected 00:36:40.643 [2024-07-23 10:57:28.930167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8b190 (107): Transport endpoint is not connected 00:36:40.643 [2024-07-23 10:57:28.931158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8b190 (9): Bad file descriptor 00:36:40.643 [2024-07-23 10:57:28.932157] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:40.643 [2024-07-23 10:57:28.932178] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:40.643 [2024-07-23 10:57:28.932193] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:40.643 request: 00:36:40.643 { 00:36:40.643 "name": "nvme0", 00:36:40.643 "trtype": "tcp", 00:36:40.643 "traddr": "127.0.0.1", 00:36:40.643 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.643 "adrfam": "ipv4", 00:36:40.643 "trsvcid": "4420", 00:36:40.643 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.643 "psk": "key1", 00:36:40.643 "method": "bdev_nvme_attach_controller", 00:36:40.643 "req_id": 1 00:36:40.643 } 00:36:40.643 Got JSON-RPC error response 00:36:40.643 response: 00:36:40.643 { 00:36:40.643 "code": -5, 00:36:40.643 "message": "Input/output error" 00:36:40.643 } 00:36:40.643 10:57:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:40.643 10:57:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:40.643 10:57:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:40.643 10:57:28 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:40.643 10:57:28 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:40.643 10:57:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:40.643 10:57:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:40.643 10:57:28 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.643 10:57:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.643 10:57:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:40.901 10:57:29 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:40.901 10:57:29 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:40.901 10:57:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:40.901 10:57:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:40.901 10:57:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.901 10:57:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.901 10:57:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:41.160 10:57:29 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:41.160 10:57:29 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:41.160 10:57:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:41.418 10:57:29 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:41.419 10:57:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:41.677 10:57:30 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:41.677 10:57:30 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:41.677 10:57:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.244 10:57:30 keyring_file -- 
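The `get_refcnt` checks above are a two-stage `jq` pipeline over `keyring_get_keys` output: select the entry whose `.name` matches, then read its `.refcnt`. The same filter, as a self-contained Python sketch (field names taken from the `jq` expressions in the log):

```python
import json

def get_refcnt(keyring_get_keys_output: str, name: str) -> int:
    """Equivalent of:
    keyring_get_keys | jq '.[] | select(.name == NAME)' | jq -r .refcnt"""
    for key in json.loads(keyring_get_keys_output):
        if key["name"] == name:
            return int(key["refcnt"])
    raise KeyError(name)
```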
keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:42.244 10:57:30 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.XLW8KVmOVI 00:36:42.244 10:57:30 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.XLW8KVmOVI 00:36:42.244 10:57:30 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:42.244 10:57:30 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.XLW8KVmOVI 00:36:42.244 10:57:30 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:42.244 10:57:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:42.244 10:57:30 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:42.244 10:57:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:42.244 10:57:30 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XLW8KVmOVI 00:36:42.244 10:57:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XLW8KVmOVI 00:36:42.244 [2024-07-23 10:57:30.746799] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XLW8KVmOVI': 0100660 00:36:42.244 [2024-07-23 10:57:30.746842] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:42.503 request: 00:36:42.503 { 00:36:42.503 "name": "key0", 00:36:42.503 "path": "/tmp/tmp.XLW8KVmOVI", 00:36:42.503 "method": "keyring_file_add_key", 00:36:42.503 "req_id": 1 00:36:42.503 } 00:36:42.503 Got JSON-RPC error response 00:36:42.503 response: 00:36:42.503 { 00:36:42.503 "code": -1, 00:36:42.503 "message": "Operation not permitted" 00:36:42.503 } 00:36:42.503 10:57:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:42.503 10:57:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 
128 )) 00:36:42.503 10:57:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:42.503 10:57:30 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:42.503 10:57:30 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.XLW8KVmOVI 00:36:42.503 10:57:30 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XLW8KVmOVI 00:36:42.503 10:57:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XLW8KVmOVI 00:36:42.761 10:57:31 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.XLW8KVmOVI 00:36:42.761 10:57:31 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:42.761 10:57:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:42.761 10:57:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:42.761 10:57:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.761 10:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.761 10:57:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:43.019 10:57:31 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:43.019 10:57:31 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:43.019 10:57:31 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:43.019 10:57:31 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:43.019 10:57:31 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:43.019 
10:57:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:43.019 10:57:31 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:43.019 10:57:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:43.019 10:57:31 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:43.019 10:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:43.277 [2024-07-23 10:57:31.572992] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.XLW8KVmOVI': No such file or directory 00:36:43.277 [2024-07-23 10:57:31.573033] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:43.277 [2024-07-23 10:57:31.573067] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:43.277 [2024-07-23 10:57:31.573081] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:43.277 [2024-07-23 10:57:31.573094] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:43.277 request: 00:36:43.277 { 00:36:43.277 "name": "nvme0", 00:36:43.277 "trtype": "tcp", 00:36:43.277 "traddr": "127.0.0.1", 00:36:43.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:43.277 "adrfam": "ipv4", 00:36:43.277 "trsvcid": "4420", 00:36:43.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:43.277 "psk": "key0", 00:36:43.277 "method": "bdev_nvme_attach_controller", 00:36:43.277 "req_id": 1 00:36:43.277 } 00:36:43.277 Got JSON-RPC error response 00:36:43.277 response: 
00:36:43.277 { 00:36:43.277 "code": -19, 00:36:43.277 "message": "No such device" 00:36:43.277 } 00:36:43.277 10:57:31 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:43.277 10:57:31 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:43.277 10:57:31 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:43.277 10:57:31 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:43.277 10:57:31 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:43.277 10:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:43.535 10:57:31 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:43.535 10:57:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:43.535 10:57:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:43.535 10:57:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:43.535 10:57:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:43.535 10:57:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:43.536 10:57:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8EdmCrkelU 00:36:43.536 10:57:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:43.536 10:57:31 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:43.536 10:57:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:43.536 10:57:31 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:43.536 10:57:31 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:43.536 10:57:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:43.536 10:57:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:43.536 
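The `prep_key`/`format_interchange_psk` step above feeds the hex key through an inline `python -` snippet to produce an NVMe TLS interchange PSK. A hedged reconstruction of what that snippet computes, following the published interchange format `NVMeTLSkey-1:HH:base64(key || CRC-32):` (`HH` is `00` for no hash; the CRC-32 byte order here is an assumption, and the helper name only mirrors the shell function):

```python
import base64
import binascii

def format_interchange_psk(key_hex: str, hash_id: int) -> str:
    """Wrap a raw hex PSK in the NVMe TLS interchange framing:
    prefix, two-digit hash indicator, base64 of key plus trailing
    CRC-32 (assumed little-endian), and a closing colon."""
    key = bytes.fromhex(key_hex)
    crc = binascii.crc32(key).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(
        hash_id, base64.b64encode(key + crc).decode("ascii"))
```

For the test's `00112233445566778899aabbccddeeff` with digest `0`, this yields a `NVMeTLSkey-1:00:...:` string whose base64 payload is the 16-byte key plus 4 CRC bytes.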
10:57:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8EdmCrkelU 00:36:43.536 10:57:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8EdmCrkelU 00:36:43.536 10:57:31 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.8EdmCrkelU 00:36:43.536 10:57:31 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8EdmCrkelU 00:36:43.536 10:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8EdmCrkelU 00:36:43.793 10:57:32 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:43.793 10:57:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:44.051 nvme0n1 00:36:44.051 10:57:32 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:44.051 10:57:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:44.051 10:57:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.051 10:57:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.051 10:57:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.051 10:57:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:44.309 10:57:32 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:44.309 10:57:32 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:44.309 10:57:32 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:44.599 10:57:32 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:44.599 10:57:32 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:44.599 10:57:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.599 10:57:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.599 10:57:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:44.879 10:57:33 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:44.879 10:57:33 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:44.879 10:57:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:44.879 10:57:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.879 10:57:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.879 10:57:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.879 10:57:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:45.136 10:57:33 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:45.136 10:57:33 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:45.136 10:57:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:45.394 10:57:33 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:45.394 10:57:33 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:45.394 10:57:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:36:45.652 10:57:33 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:45.652 10:57:33 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8EdmCrkelU 00:36:45.652 10:57:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8EdmCrkelU 00:36:45.912 10:57:34 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Np3CDBLfF1 00:36:45.912 10:57:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Np3CDBLfF1 00:36:45.912 10:57:34 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:45.912 10:57:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.478 nvme0n1 00:36:46.478 10:57:34 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:46.478 10:57:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:46.737 10:57:35 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:46.737 "subsystems": [ 00:36:46.737 { 00:36:46.737 "subsystem": "keyring", 00:36:46.737 "config": [ 00:36:46.737 { 00:36:46.737 "method": "keyring_file_add_key", 00:36:46.737 "params": { 00:36:46.737 "name": "key0", 00:36:46.737 "path": "/tmp/tmp.8EdmCrkelU" 00:36:46.737 } 00:36:46.737 }, 00:36:46.737 { 00:36:46.737 "method": "keyring_file_add_key", 00:36:46.737 "params": { 00:36:46.737 "name": "key1", 
00:36:46.737 "path": "/tmp/tmp.Np3CDBLfF1" 00:36:46.737 } 00:36:46.737 } 00:36:46.737 ] 00:36:46.737 }, 00:36:46.737 { 00:36:46.737 "subsystem": "iobuf", 00:36:46.737 "config": [ 00:36:46.737 { 00:36:46.737 "method": "iobuf_set_options", 00:36:46.737 "params": { 00:36:46.737 "small_pool_count": 8192, 00:36:46.737 "large_pool_count": 1024, 00:36:46.737 "small_bufsize": 8192, 00:36:46.737 "large_bufsize": 135168 00:36:46.737 } 00:36:46.737 } 00:36:46.737 ] 00:36:46.737 }, 00:36:46.737 { 00:36:46.737 "subsystem": "sock", 00:36:46.737 "config": [ 00:36:46.737 { 00:36:46.737 "method": "sock_set_default_impl", 00:36:46.737 "params": { 00:36:46.737 "impl_name": "posix" 00:36:46.737 } 00:36:46.737 }, 00:36:46.737 { 00:36:46.737 "method": "sock_impl_set_options", 00:36:46.737 "params": { 00:36:46.737 "impl_name": "ssl", 00:36:46.737 "recv_buf_size": 4096, 00:36:46.737 "send_buf_size": 4096, 00:36:46.737 "enable_recv_pipe": true, 00:36:46.737 "enable_quickack": false, 00:36:46.737 "enable_placement_id": 0, 00:36:46.737 "enable_zerocopy_send_server": true, 00:36:46.737 "enable_zerocopy_send_client": false, 00:36:46.737 "zerocopy_threshold": 0, 00:36:46.737 "tls_version": 0, 00:36:46.737 "enable_ktls": false 00:36:46.737 } 00:36:46.737 }, 00:36:46.737 { 00:36:46.737 "method": "sock_impl_set_options", 00:36:46.737 "params": { 00:36:46.737 "impl_name": "posix", 00:36:46.737 "recv_buf_size": 2097152, 00:36:46.737 "send_buf_size": 2097152, 00:36:46.737 "enable_recv_pipe": true, 00:36:46.737 "enable_quickack": false, 00:36:46.737 "enable_placement_id": 0, 00:36:46.737 "enable_zerocopy_send_server": true, 00:36:46.737 "enable_zerocopy_send_client": false, 00:36:46.737 "zerocopy_threshold": 0, 00:36:46.737 "tls_version": 0, 00:36:46.737 "enable_ktls": false 00:36:46.737 } 00:36:46.737 } 00:36:46.737 ] 00:36:46.737 }, 00:36:46.737 { 00:36:46.737 "subsystem": "vmd", 00:36:46.737 "config": [] 00:36:46.737 }, 00:36:46.737 { 00:36:46.737 "subsystem": "accel", 00:36:46.737 "config": [ 
00:36:46.737 { 00:36:46.737 "method": "accel_set_options", 00:36:46.737 "params": { 00:36:46.737 "small_cache_size": 128, 00:36:46.737 "large_cache_size": 16, 00:36:46.737 "task_count": 2048, 00:36:46.737 "sequence_count": 2048, 00:36:46.737 "buf_count": 2048 00:36:46.737 } 00:36:46.737 } 00:36:46.737 ] 00:36:46.737 }, 00:36:46.737 { 00:36:46.737 "subsystem": "bdev", 00:36:46.737 "config": [ 00:36:46.738 { 00:36:46.738 "method": "bdev_set_options", 00:36:46.738 "params": { 00:36:46.738 "bdev_io_pool_size": 65535, 00:36:46.738 "bdev_io_cache_size": 256, 00:36:46.738 "bdev_auto_examine": true, 00:36:46.738 "iobuf_small_cache_size": 128, 00:36:46.738 "iobuf_large_cache_size": 16 00:36:46.738 } 00:36:46.738 }, 00:36:46.738 { 00:36:46.738 "method": "bdev_raid_set_options", 00:36:46.738 "params": { 00:36:46.738 "process_window_size_kb": 1024 00:36:46.738 } 00:36:46.738 }, 00:36:46.738 { 00:36:46.738 "method": "bdev_iscsi_set_options", 00:36:46.738 "params": { 00:36:46.738 "timeout_sec": 30 00:36:46.738 } 00:36:46.738 }, 00:36:46.738 { 00:36:46.738 "method": "bdev_nvme_set_options", 00:36:46.738 "params": { 00:36:46.738 "action_on_timeout": "none", 00:36:46.738 "timeout_us": 0, 00:36:46.738 "timeout_admin_us": 0, 00:36:46.738 "keep_alive_timeout_ms": 10000, 00:36:46.738 "arbitration_burst": 0, 00:36:46.738 "low_priority_weight": 0, 00:36:46.738 "medium_priority_weight": 0, 00:36:46.738 "high_priority_weight": 0, 00:36:46.738 "nvme_adminq_poll_period_us": 10000, 00:36:46.738 "nvme_ioq_poll_period_us": 0, 00:36:46.738 "io_queue_requests": 512, 00:36:46.738 "delay_cmd_submit": true, 00:36:46.738 "transport_retry_count": 4, 00:36:46.738 "bdev_retry_count": 3, 00:36:46.738 "transport_ack_timeout": 0, 00:36:46.738 "ctrlr_loss_timeout_sec": 0, 00:36:46.738 "reconnect_delay_sec": 0, 00:36:46.738 "fast_io_fail_timeout_sec": 0, 00:36:46.738 "disable_auto_failback": false, 00:36:46.738 "generate_uuids": false, 00:36:46.738 "transport_tos": 0, 00:36:46.738 "nvme_error_stat": false, 
00:36:46.738 "rdma_srq_size": 0, 00:36:46.738 "io_path_stat": false, 00:36:46.738 "allow_accel_sequence": false, 00:36:46.738 "rdma_max_cq_size": 0, 00:36:46.738 "rdma_cm_event_timeout_ms": 0, 00:36:46.738 "dhchap_digests": [ 00:36:46.738 "sha256", 00:36:46.738 "sha384", 00:36:46.738 "sha512" 00:36:46.738 ], 00:36:46.738 "dhchap_dhgroups": [ 00:36:46.738 "null", 00:36:46.738 "ffdhe2048", 00:36:46.738 "ffdhe3072", 00:36:46.738 "ffdhe4096", 00:36:46.738 "ffdhe6144", 00:36:46.738 "ffdhe8192" 00:36:46.738 ] 00:36:46.738 } 00:36:46.738 }, 00:36:46.738 { 00:36:46.738 "method": "bdev_nvme_attach_controller", 00:36:46.738 "params": { 00:36:46.738 "name": "nvme0", 00:36:46.738 "trtype": "TCP", 00:36:46.738 "adrfam": "IPv4", 00:36:46.738 "traddr": "127.0.0.1", 00:36:46.738 "trsvcid": "4420", 00:36:46.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:46.738 "prchk_reftag": false, 00:36:46.738 "prchk_guard": false, 00:36:46.738 "ctrlr_loss_timeout_sec": 0, 00:36:46.738 "reconnect_delay_sec": 0, 00:36:46.738 "fast_io_fail_timeout_sec": 0, 00:36:46.738 "psk": "key0", 00:36:46.738 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:46.738 "hdgst": false, 00:36:46.738 "ddgst": false 00:36:46.738 } 00:36:46.738 }, 00:36:46.738 { 00:36:46.738 "method": "bdev_nvme_set_hotplug", 00:36:46.738 "params": { 00:36:46.738 "period_us": 100000, 00:36:46.738 "enable": false 00:36:46.738 } 00:36:46.738 }, 00:36:46.738 { 00:36:46.738 "method": "bdev_wait_for_examine" 00:36:46.738 } 00:36:46.738 ] 00:36:46.738 }, 00:36:46.738 { 00:36:46.738 "subsystem": "nbd", 00:36:46.738 "config": [] 00:36:46.738 } 00:36:46.738 ] 00:36:46.738 }' 00:36:46.738 10:57:35 keyring_file -- keyring/file.sh@114 -- # killprocess 3972027 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3972027 ']' 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3972027 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:46.738 10:57:35 keyring_file 
-- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3972027 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3972027' 00:36:46.738 killing process with pid 3972027 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@965 -- # kill 3972027 00:36:46.738 Received shutdown signal, test time was about 1.000000 seconds 00:36:46.738 00:36:46.738 Latency(us) 00:36:46.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:46.738 =================================================================================================================== 00:36:46.738 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@970 -- # wait 3972027 00:36:46.738 10:57:35 keyring_file -- keyring/file.sh@117 -- # bperfpid=3973258 00:36:46.738 10:57:35 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3973258 /var/tmp/bperf.sock 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3973258 ']' 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:46.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:46.738 10:57:35 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:46.738 10:57:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:46.738 10:57:35 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:46.738 "subsystems": [ 00:36:46.738 { 00:36:46.738 "subsystem": "keyring", 00:36:46.738 "config": [ 00:36:46.738 { 00:36:46.738 "method": "keyring_file_add_key", 00:36:46.738 "params": { 00:36:46.738 "name": "key0", 00:36:46.738 "path": "/tmp/tmp.8EdmCrkelU" 00:36:46.738 } 00:36:46.738 }, 00:36:46.738 { 00:36:46.738 "method": "keyring_file_add_key", 00:36:46.738 "params": { 00:36:46.738 "name": "key1", 00:36:46.738 "path": "/tmp/tmp.Np3CDBLfF1" 00:36:46.738 } 00:36:46.738 } 00:36:46.738 ] 00:36:46.738 }, 00:36:46.738 { 00:36:46.738 "subsystem": "iobuf", 00:36:46.738 "config": [ 00:36:46.738 { 00:36:46.738 "method": "iobuf_set_options", 00:36:46.738 "params": { 00:36:46.738 "small_pool_count": 8192, 00:36:46.738 "large_pool_count": 1024, 00:36:46.738 "small_bufsize": 8192, 00:36:46.739 "large_bufsize": 135168 00:36:46.739 } 00:36:46.739 } 00:36:46.739 ] 00:36:46.739 }, 00:36:46.739 { 00:36:46.739 "subsystem": "sock", 00:36:46.739 "config": [ 00:36:46.739 { 00:36:46.739 "method": "sock_set_default_impl", 00:36:46.739 "params": { 00:36:46.739 "impl_name": "posix" 00:36:46.739 } 00:36:46.739 }, 00:36:46.739 { 00:36:46.739 "method": "sock_impl_set_options", 00:36:46.739 "params": { 00:36:46.739 "impl_name": "ssl", 00:36:46.739 "recv_buf_size": 4096, 00:36:46.739 "send_buf_size": 4096, 00:36:46.739 "enable_recv_pipe": true, 00:36:46.739 "enable_quickack": false, 00:36:46.739 "enable_placement_id": 0, 00:36:46.739 "enable_zerocopy_send_server": true, 00:36:46.739 "enable_zerocopy_send_client": false, 00:36:46.739 
"zerocopy_threshold": 0, 00:36:46.739 "tls_version": 0, 00:36:46.739 "enable_ktls": false 00:36:46.739 } 00:36:46.739 }, 00:36:46.739 { 00:36:46.739 "method": "sock_impl_set_options", 00:36:46.739 "params": { 00:36:46.739 "impl_name": "posix", 00:36:46.739 "recv_buf_size": 2097152, 00:36:46.739 "send_buf_size": 2097152, 00:36:46.739 "enable_recv_pipe": true, 00:36:46.739 "enable_quickack": false, 00:36:46.739 "enable_placement_id": 0, 00:36:46.739 "enable_zerocopy_send_server": true, 00:36:46.739 "enable_zerocopy_send_client": false, 00:36:46.739 "zerocopy_threshold": 0, 00:36:46.739 "tls_version": 0, 00:36:46.739 "enable_ktls": false 00:36:46.739 } 00:36:46.739 } 00:36:46.739 ] 00:36:46.739 }, 00:36:46.739 { 00:36:46.739 "subsystem": "vmd", 00:36:46.739 "config": [] 00:36:46.739 }, 00:36:46.739 { 00:36:46.739 "subsystem": "accel", 00:36:46.739 "config": [ 00:36:46.739 { 00:36:46.739 "method": "accel_set_options", 00:36:46.739 "params": { 00:36:46.739 "small_cache_size": 128, 00:36:46.739 "large_cache_size": 16, 00:36:46.739 "task_count": 2048, 00:36:46.739 "sequence_count": 2048, 00:36:46.739 "buf_count": 2048 00:36:46.739 } 00:36:46.739 } 00:36:46.739 ] 00:36:46.739 }, 00:36:46.739 { 00:36:46.739 "subsystem": "bdev", 00:36:46.739 "config": [ 00:36:46.739 { 00:36:46.739 "method": "bdev_set_options", 00:36:46.739 "params": { 00:36:46.739 "bdev_io_pool_size": 65535, 00:36:46.739 "bdev_io_cache_size": 256, 00:36:46.739 "bdev_auto_examine": true, 00:36:46.739 "iobuf_small_cache_size": 128, 00:36:46.739 "iobuf_large_cache_size": 16 00:36:46.739 } 00:36:46.739 }, 00:36:46.739 { 00:36:46.739 "method": "bdev_raid_set_options", 00:36:46.739 "params": { 00:36:46.739 "process_window_size_kb": 1024 00:36:46.739 } 00:36:46.739 }, 00:36:46.739 { 00:36:46.739 "method": "bdev_iscsi_set_options", 00:36:46.739 "params": { 00:36:46.739 "timeout_sec": 30 00:36:46.739 } 00:36:46.739 }, 00:36:46.739 { 00:36:46.739 "method": "bdev_nvme_set_options", 00:36:46.739 "params": { 00:36:46.739 
"action_on_timeout": "none", 00:36:46.739 "timeout_us": 0, 00:36:46.739 "timeout_admin_us": 0, 00:36:46.739 "keep_alive_timeout_ms": 10000, 00:36:46.739 "arbitration_burst": 0, 00:36:46.739 "low_priority_weight": 0, 00:36:46.739 "medium_priority_weight": 0, 00:36:46.739 "high_priority_weight": 0, 00:36:46.739 "nvme_adminq_poll_period_us": 10000, 00:36:46.739 "nvme_ioq_poll_period_us": 0, 00:36:46.739 "io_queue_requests": 512, 00:36:46.739 "delay_cmd_submit": true, 00:36:46.739 "transport_retry_count": 4, 00:36:46.739 "bdev_retry_count": 3, 00:36:46.739 "transport_ack_timeout": 0, 00:36:46.739 "ctrlr_loss_timeout_sec": 0, 00:36:46.739 "reconnect_delay_sec": 0, 00:36:46.739 "fast_io_fail_timeout_sec": 0, 00:36:46.739 "disable_auto_failback": false, 00:36:46.739 "generate_uuids": false, 00:36:46.739 "transport_tos": 0, 00:36:46.739 "nvme_error_stat": false, 00:36:46.739 "rdma_srq_size": 0, 00:36:46.739 "io_path_stat": false, 00:36:46.739 "allow_accel_sequence": false, 00:36:46.739 "rdma_max_cq_size": 0, 00:36:46.739 "rdma_cm_event_timeout_ms": 0, 00:36:46.739 "dhchap_digests": [ 00:36:46.739 "sha256", 00:36:46.739 "sha384", 00:36:46.739 "sha512" 00:36:46.739 ], 00:36:46.739 "dhchap_dhgroups": [ 00:36:46.739 "null", 00:36:46.739 "ffdhe2048", 00:36:46.739 "ffdhe3072", 00:36:46.739 "ffdhe4096", 00:36:46.739 "ffdhe6144", 00:36:46.739 "ffdhe8192" 00:36:46.739 ] 00:36:46.739 } 00:36:46.739 }, 00:36:46.739 { 00:36:46.739 "method": "bdev_nvme_attach_controller", 00:36:46.739 "params": { 00:36:46.739 "name": "nvme0", 00:36:46.739 "trtype": "TCP", 00:36:46.739 "adrfam": "IPv4", 00:36:46.739 "traddr": "127.0.0.1", 00:36:46.739 "trsvcid": "4420", 00:36:46.739 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:46.739 "prchk_reftag": false, 00:36:46.739 "prchk_guard": false, 00:36:46.739 "ctrlr_loss_timeout_sec": 0, 00:36:46.739 "reconnect_delay_sec": 0, 00:36:46.739 "fast_io_fail_timeout_sec": 0, 00:36:46.739 "psk": "key0", 00:36:46.739 "hostnqn": "nqn.2016-06.io.spdk:host0", 
00:36:46.739 "hdgst": false, 00:36:46.739 "ddgst": false 00:36:46.739 } 00:36:46.739 }, 00:36:46.739 { 00:36:46.739 "method": "bdev_nvme_set_hotplug", 00:36:46.739 "params": { 00:36:46.739 "period_us": 100000, 00:36:46.739 "enable": false 00:36:46.739 } 00:36:46.739 }, 00:36:46.739 { 00:36:46.739 "method": "bdev_wait_for_examine" 00:36:46.739 } 00:36:46.739 ] 00:36:46.740 }, 00:36:46.740 { 00:36:46.740 "subsystem": "nbd", 00:36:46.740 "config": [] 00:36:46.740 } 00:36:46.740 ] 00:36:46.740 }' 00:36:47.000 [2024-07-23 10:57:35.253219] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:36:47.000 [2024-07-23 10:57:35.253311] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973258 ] 00:36:47.000 EAL: No free 2048 kB hugepages reported on node 1 00:36:47.000 [2024-07-23 10:57:35.312613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.000 [2024-07-23 10:57:35.401123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:47.260 [2024-07-23 10:57:35.575269] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:47.828 10:57:36 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:47.828 10:57:36 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:47.828 10:57:36 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:47.828 10:57:36 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:47.828 10:57:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.087 10:57:36 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:48.087 10:57:36 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:48.087 10:57:36 keyring_file 
-- keyring/common.sh@12 -- # get_key key0 00:36:48.087 10:57:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.087 10:57:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.087 10:57:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.087 10:57:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:48.346 10:57:36 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:48.346 10:57:36 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:48.346 10:57:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:48.346 10:57:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.346 10:57:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.346 10:57:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.346 10:57:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:48.604 10:57:37 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:48.604 10:57:37 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:48.604 10:57:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:48.604 10:57:37 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:48.862 10:57:37 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:48.862 10:57:37 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:48.862 10:57:37 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.8EdmCrkelU /tmp/tmp.Np3CDBLfF1 00:36:48.862 10:57:37 keyring_file -- keyring/file.sh@20 -- # killprocess 3973258 00:36:48.862 10:57:37 keyring_file -- common/autotest_common.sh@946 -- # '[' 
-z 3973258 ']' 00:36:48.862 10:57:37 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3973258 00:36:48.862 10:57:37 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:48.862 10:57:37 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:48.862 10:57:37 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3973258 00:36:48.862 10:57:37 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:48.862 10:57:37 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:48.863 10:57:37 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3973258' 00:36:48.863 killing process with pid 3973258 00:36:48.863 10:57:37 keyring_file -- common/autotest_common.sh@965 -- # kill 3973258 00:36:48.863 Received shutdown signal, test time was about 1.000000 seconds 00:36:48.863 00:36:48.863 Latency(us) 00:36:48.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.863 =================================================================================================================== 00:36:48.863 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:48.863 10:57:37 keyring_file -- common/autotest_common.sh@970 -- # wait 3973258 00:36:49.122 10:57:37 keyring_file -- keyring/file.sh@21 -- # killprocess 3971970 00:36:49.122 10:57:37 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3971970 ']' 00:36:49.122 10:57:37 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3971970 00:36:49.122 10:57:37 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:49.122 10:57:37 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:49.122 10:57:37 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3971970 00:36:49.122 10:57:37 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:49.122 10:57:37 keyring_file -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:49.122 10:57:37 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3971970' 00:36:49.122 killing process with pid 3971970 00:36:49.122 10:57:37 keyring_file -- common/autotest_common.sh@965 -- # kill 3971970 00:36:49.122 [2024-07-23 10:57:37.532747] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:49.122 10:57:37 keyring_file -- common/autotest_common.sh@970 -- # wait 3971970 00:36:49.382 00:36:49.382 real 0m14.914s 00:36:49.382 user 0m38.344s 00:36:49.382 sys 0m3.264s 00:36:49.382 10:57:37 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:49.382 10:57:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:49.382 ************************************ 00:36:49.382 END TEST keyring_file 00:36:49.382 ************************************ 00:36:49.382 10:57:37 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:49.382 10:57:37 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:49.382 10:57:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:49.382 10:57:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:49.383 10:57:37 -- common/autotest_common.sh@10 -- # set +x 00:36:49.383 ************************************ 00:36:49.383 START TEST keyring_linux 00:36:49.383 ************************************ 00:36:49.383 10:57:37 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:49.642 * Looking for test storage... 
00:36:49.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:49.642 10:57:37 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:49.642 10:57:37 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:49.642 10:57:37 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:49.642 10:57:37 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:49.642 10:57:37 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:49.642 10:57:37 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:49.643 10:57:37 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:49.643 10:57:37 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.643 10:57:37 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.643 10:57:37 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.643 10:57:37 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:49.643 10:57:37 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:49.643 10:57:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:49.643 10:57:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:49.643 10:57:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:49.643 10:57:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:49.643 10:57:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:49.643 10:57:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:49.643 10:57:37 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:49.643 /tmp/:spdk-test:key0 00:36:49.643 10:57:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:49.643 10:57:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:49.643 10:57:37 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:49.643 10:57:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:49.643 10:57:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:49.643 /tmp/:spdk-test:key1 00:36:49.643 10:57:38 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3973558 00:36:49.643 10:57:38 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:49.643 10:57:38 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3973558 00:36:49.643 10:57:38 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 3973558 ']' 00:36:49.643 10:57:38 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:49.643 10:57:38 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:49.643 10:57:38 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:49.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:49.643 10:57:38 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:49.643 10:57:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:49.643 [2024-07-23 10:57:38.080615] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:36:49.643 [2024-07-23 10:57:38.080726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973558 ] 00:36:49.643 EAL: No free 2048 kB hugepages reported on node 1 00:36:49.643 [2024-07-23 10:57:38.140885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:49.902 [2024-07-23 10:57:38.228665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.161 10:57:38 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:50.161 10:57:38 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:36:50.161 10:57:38 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:50.161 10:57:38 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.161 10:57:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:50.161 [2024-07-23 10:57:38.455166] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.161 null0 00:36:50.161 [2024-07-23 10:57:38.487201] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:50.161 [2024-07-23 10:57:38.487622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:50.161 10:57:38 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.161 10:57:38 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:50.161 934608475 00:36:50.161 10:57:38 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:50.161 756276581 00:36:50.162 10:57:38 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3973657 00:36:50.162 10:57:38 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3973657 /var/tmp/bperf.sock 
00:36:50.162 10:57:38 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:50.162 10:57:38 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 3973657 ']' 00:36:50.162 10:57:38 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:50.162 10:57:38 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:50.162 10:57:38 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:50.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:50.162 10:57:38 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:50.162 10:57:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:50.162 [2024-07-23 10:57:38.554991] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:36:50.162 [2024-07-23 10:57:38.555084] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973657 ] 00:36:50.162 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.162 [2024-07-23 10:57:38.615267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:50.420 [2024-07-23 10:57:38.702941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.420 10:57:38 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:50.420 10:57:38 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:36:50.420 10:57:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:50.420 10:57:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:50.679 10:57:39 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:50.679 10:57:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:51.250 10:57:39 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:51.250 10:57:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:51.250 [2024-07-23 10:57:39.745592] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:51.510 nvme0n1 00:36:51.510 
10:57:39 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:51.510 10:57:39 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:51.510 10:57:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:51.510 10:57:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:51.510 10:57:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:51.510 10:57:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.769 10:57:40 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:51.769 10:57:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:51.769 10:57:40 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:51.769 10:57:40 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:51.769 10:57:40 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:51.769 10:57:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.769 10:57:40 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:52.028 10:57:40 keyring_linux -- keyring/linux.sh@25 -- # sn=934608475 00:36:52.028 10:57:40 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:52.028 10:57:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:52.028 10:57:40 keyring_linux -- keyring/linux.sh@26 -- # [[ 934608475 == \9\3\4\6\0\8\4\7\5 ]] 00:36:52.028 10:57:40 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 934608475 00:36:52.028 10:57:40 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:52.028 10:57:40 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:52.289 Running I/O for 1 seconds... 00:36:53.230 00:36:53.230 Latency(us) 00:36:53.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:53.230 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:53.230 nvme0n1 : 1.01 9468.57 36.99 0.00 0.00 13411.53 5364.24 18932.62 00:36:53.230 =================================================================================================================== 00:36:53.230 Total : 9468.57 36.99 0.00 0.00 13411.53 5364.24 18932.62 00:36:53.230 0 00:36:53.230 10:57:41 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:53.230 10:57:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:53.489 10:57:41 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:53.489 10:57:41 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:53.489 10:57:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:53.489 10:57:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:53.489 10:57:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.489 10:57:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:53.748 10:57:42 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:53.748 10:57:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:53.748 10:57:42 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:53.748 10:57:42 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:53.748 10:57:42 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:36:53.748 10:57:42 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:53.748 10:57:42 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:53.748 10:57:42 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:53.748 10:57:42 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:53.748 10:57:42 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:53.748 10:57:42 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:53.748 10:57:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:54.006 [2024-07-23 10:57:42.483540] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:54.006 [2024-07-23 10:57:42.483895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12da0f0 (107): Transport endpoint is not connected 00:36:54.006 [2024-07-23 10:57:42.484887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x12da0f0 (9): Bad file descriptor 00:36:54.006 [2024-07-23 10:57:42.485888] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:54.006 [2024-07-23 10:57:42.485909] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:54.006 [2024-07-23 10:57:42.485924] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:54.006 request: 00:36:54.006 { 00:36:54.006 "name": "nvme0", 00:36:54.006 "trtype": "tcp", 00:36:54.006 "traddr": "127.0.0.1", 00:36:54.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:54.006 "adrfam": "ipv4", 00:36:54.006 "trsvcid": "4420", 00:36:54.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:54.006 "psk": ":spdk-test:key1", 00:36:54.006 "method": "bdev_nvme_attach_controller", 00:36:54.006 "req_id": 1 00:36:54.006 } 00:36:54.006 Got JSON-RPC error response 00:36:54.006 response: 00:36:54.006 { 00:36:54.006 "code": -5, 00:36:54.006 "message": "Input/output error" 00:36:54.006 } 00:36:54.006 10:57:42 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:36:54.006 10:57:42 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:54.006 10:57:42 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:54.006 10:57:42 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:54.006 10:57:42 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:54.006 10:57:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:54.006 10:57:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:54.006 10:57:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:54.006 10:57:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:54.006 10:57:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:54.006 10:57:42 keyring_linux -- keyring/linux.sh@33 -- # sn=934608475 00:36:54.006 10:57:42 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 934608475 00:36:54.265 1 links removed 00:36:54.265 10:57:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:54.265 10:57:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:54.265 10:57:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:54.265 10:57:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:54.265 10:57:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:54.265 10:57:42 keyring_linux -- keyring/linux.sh@33 -- # sn=756276581 00:36:54.265 10:57:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 756276581 00:36:54.265 1 links removed 00:36:54.265 10:57:42 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3973657 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 3973657 ']' 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 3973657 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3973657 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3973657' 00:36:54.265 killing process with pid 3973657 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@965 -- # kill 3973657 00:36:54.265 Received shutdown signal, test time was about 1.000000 seconds 00:36:54.265 00:36:54.265 Latency(us) 00:36:54.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.265 
=================================================================================================================== 00:36:54.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@970 -- # wait 3973657 00:36:54.265 10:57:42 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3973558 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 3973558 ']' 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 3973558 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3973558 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3973558' 00:36:54.265 killing process with pid 3973558 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@965 -- # kill 3973558 00:36:54.265 10:57:42 keyring_linux -- common/autotest_common.sh@970 -- # wait 3973558 00:36:54.524 00:36:54.524 real 0m5.113s 00:36:54.524 user 0m10.618s 00:36:54.524 sys 0m1.563s 00:36:54.524 10:57:42 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:54.524 10:57:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:54.524 ************************************ 00:36:54.524 END TEST keyring_linux 00:36:54.524 ************************************ 00:36:54.524 10:57:42 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:54.524 10:57:42 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:54.524 10:57:42 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:54.524 10:57:42 -- spdk/autotest.sh@321 -- # '[' 0 
-eq 1 ']' 00:36:54.524 10:57:43 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:54.524 10:57:43 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:54.524 10:57:43 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:54.524 10:57:43 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:54.524 10:57:43 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:54.524 10:57:43 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:36:54.524 10:57:43 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:54.524 10:57:43 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:54.524 10:57:43 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:54.524 10:57:43 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:54.524 10:57:43 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:54.524 10:57:43 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:54.524 10:57:43 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:54.524 10:57:43 -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:54.524 10:57:43 -- common/autotest_common.sh@10 -- # set +x 00:36:54.524 10:57:43 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:54.524 10:57:43 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:36:54.524 10:57:43 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:36:54.524 10:57:43 -- common/autotest_common.sh@10 -- # set +x 00:36:56.427 INFO: APP EXITING 00:36:56.427 INFO: killing all VMs 00:36:56.427 INFO: killing vhost app 00:36:56.427 WARN: no vhost pid file found 00:36:56.427 INFO: EXIT DONE 00:36:56.994 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:36:56.994 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:36:56.994 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:36:56.994 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:36:57.251 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:36:57.251 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:36:57.251 0000:00:04.2 (8086 3c22): Already using the ioatdma 
driver 00:36:57.251 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:36:57.251 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:36:57.251 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:36:57.251 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:36:57.251 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:36:57.251 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:36:57.251 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:36:57.251 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:36:57.251 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:36:57.251 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:36:58.189 Cleaning 00:36:58.189 Removing: /var/run/dpdk/spdk0/config 00:36:58.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:58.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:58.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:58.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:58.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:58.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:58.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:58.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:58.189 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:58.189 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:58.189 Removing: /var/run/dpdk/spdk1/config 00:36:58.189 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:58.189 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:58.189 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:58.189 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:58.189 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:58.189 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:58.447 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:58.447 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:58.447 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:58.447 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:58.447 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:58.447 Removing: /var/run/dpdk/spdk2/config 00:36:58.447 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:58.447 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:58.447 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:58.447 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:58.447 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:58.447 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:58.447 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:58.447 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:58.447 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:58.447 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:58.447 Removing: /var/run/dpdk/spdk3/config 00:36:58.447 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:58.447 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:58.447 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:58.447 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:58.447 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:58.447 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:58.447 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:58.447 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:58.447 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:58.447 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:58.447 Removing: /var/run/dpdk/spdk4/config 00:36:58.447 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:58.447 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:58.447 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:58.447 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:58.447 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:58.447 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:58.447 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:58.447 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:58.447 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:58.447 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:58.447 Removing: /dev/shm/bdev_svc_trace.1 00:36:58.447 Removing: /dev/shm/nvmf_trace.0 00:36:58.447 Removing: /dev/shm/spdk_tgt_trace.pid3719153 00:36:58.447 Removing: /var/run/dpdk/spdk0 00:36:58.447 Removing: /var/run/dpdk/spdk1 00:36:58.447 Removing: /var/run/dpdk/spdk2 00:36:58.447 Removing: /var/run/dpdk/spdk3 00:36:58.447 Removing: /var/run/dpdk/spdk4 00:36:58.447 Removing: /var/run/dpdk/spdk_pid3717928 00:36:58.447 Removing: /var/run/dpdk/spdk_pid3718503 00:36:58.447 Removing: /var/run/dpdk/spdk_pid3719153 00:36:58.447 Removing: /var/run/dpdk/spdk_pid3719515 00:36:58.447 Removing: /var/run/dpdk/spdk_pid3720048 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3720068 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3720620 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3720714 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3720921 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3721954 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3722636 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3722831 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3722984 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3723158 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3723315 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3723440 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3723562 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3723791 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3724164 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3726303 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3726433 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3726563 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3726566 00:36:58.448 Removing: 
/var/run/dpdk/spdk_pid3727194 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3727305 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3727667 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3727753 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3727891 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3727984 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3728116 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3728132 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3728434 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3728605 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3728809 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3728933 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3728971 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3729037 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3729241 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3729366 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3729497 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3729616 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3729820 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3729949 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3730071 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3730201 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3730403 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3730528 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3730654 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3730775 00:36:58.448 Removing: /var/run/dpdk/spdk_pid3730984 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3731115 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3731235 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3731378 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3731572 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3731704 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3731829 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3732034 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3732110 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3732278 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3733889 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3776059 
00:36:58.706 Removing: /var/run/dpdk/spdk_pid3777991 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3783457 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3785899 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3787715 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3788023 00:36:58.706 Removing: /var/run/dpdk/spdk_pid3793646 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3793728 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3794150 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3794643 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3795141 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3795443 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3795453 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3795640 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3795685 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3795753 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3796249 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3796653 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3797152 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3797547 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3797559 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3797750 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3798558 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3799635 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3803807 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3804022 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3805965 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3808898 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3810555 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3815424 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3819397 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3820384 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3820906 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3829324 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3830939 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3853485 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3855636 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3856530 00:36:58.707 Removing: 
/var/run/dpdk/spdk_pid3857521 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3857639 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3857741 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3857758 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3858095 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3859095 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3859649 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3859891 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3861112 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3861351 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3861788 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3863631 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3866134 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3869436 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3887204 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3889321 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3892856 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3893595 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3894463 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3896451 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3898189 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3901372 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3901457 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3903605 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3903707 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3903810 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3904023 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3904103 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3904920 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3905828 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3906792 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3907685 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3908661 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3909551 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3912480 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3912810 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3913797 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3914445 
00:36:58.707 Removing: /var/run/dpdk/spdk_pid3917889 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3919412 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3922031 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3924625 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3929694 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3933075 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3933077 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3943141 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3943534 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3943881 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3944545 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3945239 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3945551 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3945867 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3946237 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3948112 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3948217 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3951206 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3951266 00:36:58.707 Removing: /var/run/dpdk/spdk_pid3952602 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3956384 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3956472 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3958635 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3959700 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3960842 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3961410 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3962483 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3963146 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3967170 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3967430 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3967744 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3969055 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3969871 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3970090 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3971970 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3972027 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3973258 00:36:58.966 Removing: 
/var/run/dpdk/spdk_pid3973558 00:36:58.966 Removing: /var/run/dpdk/spdk_pid3973657 00:36:58.966 Clean 00:36:58.966 10:57:47 -- common/autotest_common.sh@1447 -- # return 0 00:36:58.966 10:57:47 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:58.966 10:57:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:58.966 10:57:47 -- common/autotest_common.sh@10 -- # set +x 00:36:58.966 10:57:47 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:58.966 10:57:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:58.966 10:57:47 -- common/autotest_common.sh@10 -- # set +x 00:36:58.966 10:57:47 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:58.966 10:57:47 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:58.966 10:57:47 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:58.966 10:57:47 -- spdk/autotest.sh@391 -- # hash lcov 00:36:58.966 10:57:47 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:58.966 10:57:47 -- spdk/autotest.sh@393 -- # hostname 00:36:58.966 10:57:47 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-02 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:59.224 geninfo: WARNING: invalid characters removed from testname! 
00:37:37.951 10:58:23 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:40.530 10:58:28 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:43.058 10:58:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:47.317 10:58:35 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:49.848 10:58:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:54.048 10:58:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:57.342 10:58:45 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:57.342 10:58:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:57.342 10:58:45 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:57.342 10:58:45 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:57.342 10:58:45 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:57.342 10:58:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.342 10:58:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.342 10:58:45 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.342 10:58:45 -- paths/export.sh@5 -- $ export PATH 00:37:57.342 10:58:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.342 10:58:45 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:57.342 10:58:45 -- common/autobuild_common.sh@440 -- $ date +%s 00:37:57.342 10:58:45 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721725125.XXXXXX 00:37:57.342 10:58:45 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721725125.VRjsVo 00:37:57.342 10:58:45 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:37:57.342 10:58:45 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:37:57.342 10:58:45 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:57.342 10:58:45 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:57.342 10:58:45 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:57.342 10:58:45 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:57.342 10:58:45 -- common/autobuild_common.sh@456 -- $ get_config_params 00:37:57.342 10:58:45 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:37:57.342 10:58:45 -- common/autotest_common.sh@10 -- $ set +x 00:37:57.342 10:58:45 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:57.342 10:58:45 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:37:57.342 10:58:45 -- pm/common@17 -- $ local monitor 00:37:57.342 10:58:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:57.342 10:58:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:57.342 10:58:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:57.342 10:58:45 -- pm/common@21 -- $ date +%s 00:37:57.342 10:58:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:57.342 10:58:45 -- pm/common@21 -- $ date +%s 00:37:57.342 10:58:45 -- pm/common@25 -- $ sleep 1 00:37:57.342 10:58:45 -- pm/common@21 -- $ date +%s 00:37:57.342 10:58:45 -- pm/common@21 -- $ date +%s 00:37:57.342 10:58:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721725125 00:37:57.342 10:58:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721725125 00:37:57.342 
10:58:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721725125 00:37:57.342 10:58:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721725125 00:37:57.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721725125_collect-vmstat.pm.log 00:37:57.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721725125_collect-cpu-load.pm.log 00:37:57.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721725125_collect-cpu-temp.pm.log 00:37:57.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721725125_collect-bmc-pm.bmc.pm.log 00:37:58.282 10:58:46 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:37:58.282 10:58:46 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j32 00:37:58.282 10:58:46 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:58.282 10:58:46 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:58.282 10:58:46 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:58.282 10:58:46 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:58.282 10:58:46 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:58.282 10:58:46 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:58.282 10:58:46 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:58.282 10:58:46 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:58.282 10:58:46 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:58.282 10:58:46 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:58.282 10:58:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:58.282 10:58:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:58.282 10:58:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:58.282 10:58:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:58.282 10:58:46 -- pm/common@44 -- $ pid=3983908 00:37:58.282 10:58:46 -- pm/common@50 -- $ kill -TERM 3983908 00:37:58.282 10:58:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:58.282 10:58:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:58.282 10:58:46 -- pm/common@44 -- $ pid=3983910 00:37:58.282 10:58:46 -- pm/common@50 -- $ kill -TERM 3983910 00:37:58.282 10:58:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:58.282 10:58:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:58.282 10:58:46 -- pm/common@44 -- $ pid=3983912 00:37:58.282 10:58:46 -- pm/common@50 -- $ kill -TERM 3983912 00:37:58.282 10:58:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:58.282 10:58:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:58.282 10:58:46 -- pm/common@44 -- $ pid=3983942 00:37:58.282 10:58:46 -- pm/common@50 -- $ sudo -E kill -TERM 3983942 00:37:58.282 + [[ -n 3623159 ]] 00:37:58.282 + sudo kill 3623159 00:37:58.553 [Pipeline] } 00:37:58.573 [Pipeline] // stage 00:37:58.579 [Pipeline] } 00:37:58.598 [Pipeline] // timeout 00:37:58.605 [Pipeline] } 00:37:58.623 [Pipeline] // catchError 00:37:58.629 [Pipeline] } 
00:37:58.647 [Pipeline] // wrap 00:37:58.655 [Pipeline] } 00:37:58.672 [Pipeline] // catchError 00:37:58.681 [Pipeline] stage 00:37:58.684 [Pipeline] { (Epilogue) 00:37:58.699 [Pipeline] catchError 00:37:58.701 [Pipeline] { 00:37:58.716 [Pipeline] echo 00:37:58.718 Cleanup processes 00:37:58.724 [Pipeline] sh 00:37:59.011 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:59.011 3984067 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:59.011 3984126 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:59.026 [Pipeline] sh 00:37:59.314 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:59.314 ++ grep -v 'sudo pgrep' 00:37:59.314 ++ awk '{print $1}' 00:37:59.314 + sudo kill -9 3984067 00:37:59.326 [Pipeline] sh 00:37:59.610 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:07.771 [Pipeline] sh 00:38:08.056 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:08.057 Artifacts sizes are good 00:38:08.074 [Pipeline] archiveArtifacts 00:38:08.082 Archiving artifacts 00:38:08.335 [Pipeline] sh 00:38:08.644 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:08.660 [Pipeline] cleanWs 00:38:08.671 [WS-CLEANUP] Deleting project workspace... 00:38:08.671 [WS-CLEANUP] Deferred wipeout is used... 00:38:08.679 [WS-CLEANUP] done 00:38:08.680 [Pipeline] } 00:38:08.704 [Pipeline] // catchError 00:38:08.716 [Pipeline] sh 00:38:09.000 + logger -p user.info -t JENKINS-CI 00:38:09.008 [Pipeline] } 00:38:09.023 [Pipeline] // stage 00:38:09.029 [Pipeline] } 00:38:09.046 [Pipeline] // node 00:38:09.053 [Pipeline] End of Pipeline 00:38:09.094 Finished: SUCCESS